Jan 05 21:51:46 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 05 21:51:46 crc restorecon[4775]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:51:46 crc restorecon[4775]: 
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:46 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 
21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc 
restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:51:47 crc restorecon[4775]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 05 21:51:47 crc kubenswrapper[5034]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 21:51:47 crc kubenswrapper[5034]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 05 21:51:47 crc kubenswrapper[5034]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 21:51:47 crc kubenswrapper[5034]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
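
The restorecon run above logs one "not reset as customized by admin" entry per file, and the MCS pair at the end of each context (for example s0:c7,c13) groups the files by pod. A quick tally makes a run this long auditable; the sketch below is not part of the log and assumes the journal text has been saved to a local file (kubelet-journal.log is a hypothetical name):

import re
from collections import Counter

# Matches the 'restorecon[pid]: PATH not reset as customized by admin to
# CONTEXT' shape seen throughout this capture.
ENTRY = re.compile(
    r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin to "
    r"(?P<context>\S+)"
)
POD_UID = re.compile(r"/var/lib/kubelet/pods/(?P<uid>[^/]+)/")

def tally(journal_text: str) -> tuple[Counter, Counter]:
    """Count 'not reset' entries per SELinux context and per pod UID."""
    by_context: Counter = Counter()
    by_pod: Counter = Counter()
    # finditer over the whole text copes with entries wrapped across lines.
    for m in ENTRY.finditer(journal_text):
        by_context[m.group("context")] += 1
        pod = POD_UID.search(m.group("path"))
        if pod:
            by_pod[pod.group("uid")] += 1
    return by_context, by_pod

if __name__ == "__main__":
    with open("kubelet-journal.log", encoding="utf-8") as fh:
        contexts, pods = tally(fh.read())
    for ctx, n in contexts.most_common(10):
        print(f"{n:6d}  {ctx}")
    print("---")
    for uid, n in pods.most_common(10):
        print(f"{n:6d}  {uid}")
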
Jan 05 21:51:47 crc kubenswrapper[5034]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 05 21:51:47 crc kubenswrapper[5034]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.710752 5034 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713592 5034 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713610 5034 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713616 5034 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713621 5034 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713627 5034 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713632 5034 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713637 5034 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713641 5034 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713645 5034 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713650 5034 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
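
Each deprecation warning above points to the same remedy: set the parameter in the KubeletConfiguration file passed via --config rather than on the command line. A sketch of that flag-to-field mapping follows; the field names are drawn from the kubelet.config.k8s.io/v1beta1 schema as best understood here and should be treated as assumptions to verify against the Kubelet version in use:

import re

# Hypothetical mapping from the deprecated flags logged above to the
# KubeletConfiguration fields that replace them; verify against your release.
DEPRECATED_FLAG_TO_CONFIG = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    "--register-with-taints": "registerWithTaints",
    "--system-reserved": "systemReserved",
    # No direct field: the warning says to use eviction settings instead.
    "--minimum-container-ttl-duration": "evictionHard / evictionSoft",
    # Slated for removal outright: image GC reads the sandbox image from CRI.
    "--pod-infra-container-image": None,
}

def report_deprecated_flags(journal_text: str) -> None:
    """Print each deprecated flag found in the journal and its config-file home."""
    for flag in sorted(set(re.findall(r"Flag (--[\w-]+) has been deprecated",
                                      journal_text))):
        target = DEPRECATED_FLAG_TO_CONFIG.get(flag, "unknown; see kubelet docs")
        print(f"{flag} -> {target}")

if __name__ == "__main__":
    with open("kubelet-journal.log", encoding="utf-8") as fh:
        report_deprecated_flags(fh.read())
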
Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713654 5034 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713658 5034 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713662 5034 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713665 5034 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713669 5034 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713672 5034 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713675 5034 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713679 5034 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713683 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713686 5034 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713701 5034 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713704 5034 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713708 5034 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713711 5034 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713715 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713718 5034 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713721 5034 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713725 5034 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713728 5034 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713733 5034 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713736 5034 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713740 5034 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713743 5034 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713747 5034 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713750 5034 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713753 5034 feature_gate.go:330] unrecognized feature 
gate: RouteAdvertisements Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713759 5034 feature_gate.go:330] unrecognized feature gate: Example Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713762 5034 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713767 5034 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713771 5034 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713776 5034 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713781 5034 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713785 5034 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713789 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713794 5034 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713797 5034 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713801 5034 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713805 5034 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713808 5034 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713812 5034 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713815 5034 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713818 5034 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713823 5034 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713826 5034 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713830 5034 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713833 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713837 5034 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713840 5034 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713844 5034 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713847 5034 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713851 5034 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 21:51:47 crc 
kubenswrapper[5034]: W0105 21:51:47.713854 5034 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713857 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713861 5034 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713864 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713868 5034 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713871 5034 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713874 5034 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713878 5034 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713882 5034 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.713886 5034 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713952 5034 flags.go:64] FLAG: --address="0.0.0.0" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713959 5034 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713965 5034 flags.go:64] FLAG: --anonymous-auth="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713971 5034 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713976 5034 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713980 5034 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713986 5034 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713990 5034 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713994 5034 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.713999 5034 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714003 5034 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714007 5034 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714011 5034 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714015 5034 flags.go:64] FLAG: --cgroup-root="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714019 5034 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714024 5034 flags.go:64] FLAG: --client-ca-file="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714027 5034 flags.go:64] FLAG: --cloud-config="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714031 5034 flags.go:64] FLAG: --cloud-provider="" Jan 05 21:51:47 
crc kubenswrapper[5034]: I0105 21:51:47.714035 5034 flags.go:64] FLAG: --cluster-dns="[]" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714041 5034 flags.go:64] FLAG: --cluster-domain="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714045 5034 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714049 5034 flags.go:64] FLAG: --config-dir="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714053 5034 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714057 5034 flags.go:64] FLAG: --container-log-max-files="5" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714063 5034 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714067 5034 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714072 5034 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714093 5034 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714099 5034 flags.go:64] FLAG: --contention-profiling="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714104 5034 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714110 5034 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714115 5034 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714120 5034 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714126 5034 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714131 5034 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714135 5034 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714139 5034 flags.go:64] FLAG: --enable-load-reader="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714143 5034 flags.go:64] FLAG: --enable-server="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714147 5034 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714152 5034 flags.go:64] FLAG: --event-burst="100" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714156 5034 flags.go:64] FLAG: --event-qps="50" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714160 5034 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714164 5034 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714168 5034 flags.go:64] FLAG: --eviction-hard="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714173 5034 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714177 5034 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714181 5034 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714185 5034 flags.go:64] FLAG: --eviction-soft="" Jan 
05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714189 5034 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714193 5034 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714197 5034 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714201 5034 flags.go:64] FLAG: --experimental-mounter-path="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714205 5034 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714209 5034 flags.go:64] FLAG: --fail-swap-on="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714213 5034 flags.go:64] FLAG: --feature-gates="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714217 5034 flags.go:64] FLAG: --file-check-frequency="20s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714222 5034 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714226 5034 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714231 5034 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714235 5034 flags.go:64] FLAG: --healthz-port="10248" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714239 5034 flags.go:64] FLAG: --help="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714242 5034 flags.go:64] FLAG: --hostname-override="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714246 5034 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714251 5034 flags.go:64] FLAG: --http-check-frequency="20s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714255 5034 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714259 5034 flags.go:64] FLAG: --image-credential-provider-config="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714263 5034 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714267 5034 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714271 5034 flags.go:64] FLAG: --image-service-endpoint="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714274 5034 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714279 5034 flags.go:64] FLAG: --kube-api-burst="100" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714283 5034 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714287 5034 flags.go:64] FLAG: --kube-api-qps="50" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714291 5034 flags.go:64] FLAG: --kube-reserved="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714295 5034 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714298 5034 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714304 5034 flags.go:64] FLAG: --kubelet-cgroups="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714308 5034 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 05 21:51:47 crc 
kubenswrapper[5034]: I0105 21:51:47.714312 5034 flags.go:64] FLAG: --lock-file="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714316 5034 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714320 5034 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714324 5034 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714330 5034 flags.go:64] FLAG: --log-json-split-stream="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714334 5034 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714337 5034 flags.go:64] FLAG: --log-text-split-stream="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714341 5034 flags.go:64] FLAG: --logging-format="text" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714345 5034 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714350 5034 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714354 5034 flags.go:64] FLAG: --manifest-url="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714357 5034 flags.go:64] FLAG: --manifest-url-header="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714363 5034 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714368 5034 flags.go:64] FLAG: --max-open-files="1000000" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714374 5034 flags.go:64] FLAG: --max-pods="110" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714378 5034 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714382 5034 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714387 5034 flags.go:64] FLAG: --memory-manager-policy="None" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714391 5034 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714396 5034 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714400 5034 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714404 5034 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714413 5034 flags.go:64] FLAG: --node-status-max-images="50" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714417 5034 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714422 5034 flags.go:64] FLAG: --oom-score-adj="-999" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714426 5034 flags.go:64] FLAG: --pod-cidr="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714430 5034 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714436 5034 flags.go:64] FLAG: --pod-manifest-path="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714446 5034 flags.go:64] FLAG: 
--pod-max-pids="-1" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714451 5034 flags.go:64] FLAG: --pods-per-core="0" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714455 5034 flags.go:64] FLAG: --port="10250" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714459 5034 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714463 5034 flags.go:64] FLAG: --provider-id="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714467 5034 flags.go:64] FLAG: --qos-reserved="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714471 5034 flags.go:64] FLAG: --read-only-port="10255" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714475 5034 flags.go:64] FLAG: --register-node="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714479 5034 flags.go:64] FLAG: --register-schedulable="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714483 5034 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714494 5034 flags.go:64] FLAG: --registry-burst="10" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714498 5034 flags.go:64] FLAG: --registry-qps="5" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714502 5034 flags.go:64] FLAG: --reserved-cpus="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714506 5034 flags.go:64] FLAG: --reserved-memory="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714511 5034 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714515 5034 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714519 5034 flags.go:64] FLAG: --rotate-certificates="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714523 5034 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714528 5034 flags.go:64] FLAG: --runonce="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714532 5034 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714536 5034 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714540 5034 flags.go:64] FLAG: --seccomp-default="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714549 5034 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714554 5034 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714558 5034 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714563 5034 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714567 5034 flags.go:64] FLAG: --storage-driver-password="root" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714571 5034 flags.go:64] FLAG: --storage-driver-secure="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714575 5034 flags.go:64] FLAG: --storage-driver-table="stats" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714579 5034 flags.go:64] FLAG: --storage-driver-user="root" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714583 5034 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 05 21:51:47 crc 
kubenswrapper[5034]: I0105 21:51:47.714587 5034 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714591 5034 flags.go:64] FLAG: --system-cgroups="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714595 5034 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714602 5034 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714605 5034 flags.go:64] FLAG: --tls-cert-file="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714610 5034 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714616 5034 flags.go:64] FLAG: --tls-min-version="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714619 5034 flags.go:64] FLAG: --tls-private-key-file="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714624 5034 flags.go:64] FLAG: --topology-manager-policy="none" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714628 5034 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714632 5034 flags.go:64] FLAG: --topology-manager-scope="container" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714636 5034 flags.go:64] FLAG: --v="2" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714641 5034 flags.go:64] FLAG: --version="false" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714646 5034 flags.go:64] FLAG: --vmodule="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714652 5034 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.714656 5034 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714749 5034 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714753 5034 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714758 5034 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714762 5034 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714765 5034 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714770 5034 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714774 5034 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714780 5034 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714784 5034 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714788 5034 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714792 5034 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
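
The long run of flags.go:64] FLAG: --name="value" lines above is the kubelet dumping every parsed command-line flag at verbosity 2 (note --v="2" in the dump itself). A minimal reproduction of that logging pattern with the standard flag package, hedged in that the real kubelet uses pflag, but the visit-all idiom is the same:

```go
package main

import (
	"flag"
	"log"
)

func main() {
	// Two stand-in flags with defaults taken from the dump above;
	// the real kubelet registers hundreds.
	flag.String("node-ip", "192.168.126.11", "node IP")
	flag.Int("max-pods", 110, "maximum pods per node")
	flag.Parse()

	// Walk every registered flag and log it as name="value",
	// matching the FLAG: lines in the log.
	flag.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
	})
}
```
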
Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714797 5034 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714801 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714805 5034 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714809 5034 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714812 5034 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714816 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714820 5034 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714824 5034 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714827 5034 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714831 5034 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714834 5034 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714838 5034 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714843 5034 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714847 5034 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714851 5034 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714854 5034 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714858 5034 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714863 5034 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714867 5034 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714871 5034 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
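
The repeated blocks of "unrecognized feature gate" warnings arise because the full OpenShift-level gate list is handed to a component that only registers the upstream Kubernetes gates: unknown names are warned about and skipped, known GA or deprecated gates log the Setting ... lines, and the surviving entries become the "feature gates: {map[...]}" summary. A minimal sketch of that behavior, with hypothetical names (the real registry lives in k8s.io/component-base/featuregate):

```go
package main

import "log"

// known maps gate names this component understands to whether the gate
// is locked GA; a stand-in for the real featuregate registry.
var known = map[string]bool{
	"CloudDualStackNodeIPs":                  true,
	"DisableKubeletCloudCredentialProviders": true,
	"ValidatingAdmissionPolicy":              true,
	"NodeSwap":                               false,
}

// apply mirrors the warning pattern from feature_gate.go above: unknown
// names are skipped with a warning, GA gates warn about future removal,
// and everything else is recorded in the resulting map.
func apply(requested map[string]bool) map[string]bool {
	enabled := map[string]bool{}
	for name, val := range requested {
		ga, ok := known[name]
		if !ok {
			log.Printf("unrecognized feature gate: %s", name)
			continue
		}
		if ga {
			log.Printf("Setting GA feature gate %s=%v. It will be removed in a future release.", name, val)
		}
		enabled[name] = val
	}
	return enabled
}

func main() {
	log.Printf("feature gates: %v", apply(map[string]bool{
		"GatewayAPI":            true, // OpenShift-level gate, unknown here
		"CloudDualStackNodeIPs": true,
	}))
}
```
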
Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714876 5034 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714880 5034 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714885 5034 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714889 5034 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714892 5034 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714896 5034 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714899 5034 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714903 5034 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714908 5034 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714912 5034 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714916 5034 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714920 5034 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714923 5034 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714927 5034 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714930 5034 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714934 5034 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714937 5034 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714940 5034 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714944 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714947 5034 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714951 5034 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714954 5034 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714958 5034 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714961 5034 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714964 5034 feature_gate.go:330] unrecognized feature gate: Example Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714968 5034 
feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714971 5034 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714974 5034 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714980 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714983 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714987 5034 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714990 5034 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714994 5034 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.714997 5034 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.715001 5034 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.715004 5034 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.715008 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.715011 5034 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.715014 5034 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.715018 5034 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.715026 5034 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.721775 5034 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.721818 5034 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721927 5034 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721937 5034 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721941 5034 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721960 5034 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721964 5034 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 
21:51:47.721969 5034 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721973 5034 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721976 5034 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721980 5034 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721985 5034 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721989 5034 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721993 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.721996 5034 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722000 5034 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722004 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722008 5034 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722011 5034 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722015 5034 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722018 5034 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722022 5034 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722037 5034 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722041 5034 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722044 5034 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722048 5034 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722051 5034 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722055 5034 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722059 5034 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722063 5034 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722066 5034 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722070 5034 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722090 5034 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 
21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722094 5034 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722097 5034 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722101 5034 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722107 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722111 5034 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722114 5034 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722118 5034 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722122 5034 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722127 5034 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722132 5034 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722137 5034 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722142 5034 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722147 5034 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722151 5034 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722171 5034 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722175 5034 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722181 5034 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722188 5034 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722192 5034 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722196 5034 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722201 5034 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722206 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722210 5034 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722214 5034 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722218 5034 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722222 5034 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722225 5034 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722229 5034 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722246 5034 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722250 5034 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722255 5034 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722259 5034 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722263 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722267 5034 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722270 5034 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722274 5034 feature_gate.go:330] unrecognized feature gate: Example Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722278 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722281 5034 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722285 5034 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722289 5034 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.722296 5034 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722460 5034 feature_gate.go:330] unrecognized feature gate: Example Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722479 5034 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722484 5034 
feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722487 5034 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722491 5034 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722495 5034 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722499 5034 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722503 5034 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722507 5034 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722511 5034 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722515 5034 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722519 5034 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722523 5034 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722527 5034 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722530 5034 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722534 5034 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722538 5034 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722555 5034 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722559 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722563 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722566 5034 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722570 5034 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722573 5034 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722577 5034 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722581 5034 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722585 5034 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722588 5034 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722592 5034 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 21:51:47 crc kubenswrapper[5034]: 
W0105 21:51:47.722597 5034 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722603 5034 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722607 5034 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722612 5034 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722616 5034 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722637 5034 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722643 5034 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722649 5034 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722655 5034 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722659 5034 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722663 5034 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722667 5034 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722671 5034 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722675 5034 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722679 5034 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722683 5034 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722687 5034 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722691 5034 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722710 5034 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722714 5034 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722718 5034 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722722 5034 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722726 5034 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722731 5034 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722735 5034 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 
21:51:47.722740 5034 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722745 5034 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722749 5034 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722754 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722758 5034 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722763 5034 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722768 5034 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722772 5034 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722791 5034 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722795 5034 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722799 5034 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722802 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722806 5034 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722809 5034 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722813 5034 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722816 5034 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722819 5034 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.722824 5034 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.722830 5034 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.723035 5034 server.go:940] "Client rotation is on, will bootstrap in background" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.729017 5034 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.729196 5034 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
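
The bootstrap entries just above, and the rotation-deadline entry that follows, come from client-go's certificate manager: it loads kubelet-client-current.pem, picks a rotation deadline at a jittered fraction of the certificate's lifetime, and then requests a new CSR, which in this log fails with connection refused because api-int.crc.testing:6443 is not yet serving. A hedged, stdlib-only sketch of the deadline computation; the 70-90% jitter window is an assumption modeled on client-go's documented behavior, not a value taken from this log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	// Load the current client certificate, as in the log above.
	raw, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Rotate somewhere in the last 10-30% of the certificate's lifetime;
	// the exact window is an assumption, chosen to resemble the jittered
	// deadline visible in the log entry below.
	total := cert.NotAfter.Sub(cert.NotBefore)
	frac := 0.7 + 0.2*rand.Float64()
	deadline := cert.NotBefore.Add(time.Duration(frac * float64(total)))
	fmt.Printf("expiration %s, rotation deadline %s\n", cert.NotAfter, deadline)
}
```
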
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.730019 5034 server.go:997] "Starting client certificate rotation"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.730109 5034 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.730590 5034 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 17:11:44.318813071 +0000 UTC
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.730708 5034 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.737794 5034 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.739563 5034 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.740502 5034 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.746587 5034 log.go:25] "Validated CRI v1 runtime API"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.759296 5034 log.go:25] "Validated CRI v1 image API"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.760538 5034 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.762380 5034 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-05-21-46-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.762432 5034 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.787295 5034 manager.go:217] Machine: {Timestamp:2026-01-05 21:51:47.785763458 +0000 UTC m=+0.157762967 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:098a0fe6-e384-4f14-835d-619afd5e29b6 BootID:c744546f-b651-4674-9e81-ae7afa931a00 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:4c:ad Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:20:4c:ad Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8a:23:38 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b8:6f:d5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c5:e8:dc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:48:32:79 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:1d:0e:b3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:60:b0:05:24:67 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:6e:d2:e6:fa:e7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.787496 5034 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.787625 5034 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.788005 5034 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.788168 5034 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.788197 5034 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.788358 5034 topology_manager.go:138] "Creating topology manager with none policy"
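[Editor's note] The HardEvictionThresholds array inside the nodeConfig dump is easier to read once decoded. A sketch that unmarshals two of the logged thresholds; the field names are taken from the log line, but the struct itself is illustrative, not the kubelet's own type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Threshold mirrors the fields visible in the nodeConfig dump above.
// Quantity is a pointer so the logged "Quantity":null decodes cleanly.
type Threshold struct {
	Signal   string
	Operator string
	Value    struct {
		Quantity   *string
		Percentage float64
	}
	GracePeriod int64
}

func main() {
	// Two entries copied from the logged HardEvictionThresholds.
	raw := `[{"Signal":"memory.available","Operator":"LessThan",
	          "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0},
	         {"Signal":"nodefs.available","Operator":"LessThan",
	          "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0}]`
	var ts []Threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		log.Fatal(err)
	}
	for _, t := range ts {
		q := "none"
		if t.Value.Quantity != nil {
			q = *t.Value.Quantity
		}
		fmt.Printf("%s %s quantity=%s pct=%v\n", t.Signal, t.Operator, q, t.Value.Percentage)
	}
}
```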
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.788375 5034 container_manager_linux.go:303] "Creating device plugin manager"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.788571 5034 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.788609 5034 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.788920 5034 state_mem.go:36] "Initialized new in-memory state store"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.789007 5034 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.789556 5034 kubelet.go:418] "Attempting to sync node with API server"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.789575 5034 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.789599 5034 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.789614 5034 kubelet.go:324] "Adding apiserver pod source"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.789626 5034 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.791363 5034 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.791532 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.791549 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.791587 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.791609 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.791779 5034 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
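[Editor's note] Every list/watch above fails with "connection refused" against https://api-int.crc.testing:6443 because the API server is not yet accepting connections; the reflectors and controllers simply retry. A minimal standard-library probe that reproduces what they see (endpoint from the log; the attempt count and sleep are arbitrary):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the reflector errors above.
	addr := "api-int.crc.testing:6443"
	for i := 0; i < 5; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Expected while the API server is still coming up:
			// "connect: connection refused".
			fmt.Println("dial failed:", err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("API server port is accepting connections")
		return
	}
}
```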
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792363 5034 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792787 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792807 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792815 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792822 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792832 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792839 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792845 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792855 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792862 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792871 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792893 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.792901 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.793063 5034 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.793447 5034 server.go:1280] "Started kubelet"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.793844 5034 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.793961 5034 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.793969 5034 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.794622 5034 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.795014 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.795036 5034 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.795330 5034 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.795352 5034 volume_manager.go:289] "Starting Kubelet Volume Manager"
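[Editor's note] The podresources API above is served on a unix socket rather than a TCP port. A quick sketch that only checks something is listening at the logged path; the real API on top is gRPC, which this probe does not speak:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the podresources log line above.
	conn, err := net.DialTimeout("unix", "/var/lib/kubelet/pod-resources/kubelet.sock", time.Second)
	if err != nil {
		fmt.Println("podresources socket not ready:", err)
		return
	}
	conn.Close()
	fmt.Println("podresources socket is accepting connections")
}
```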
Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.795186 5034 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1887f439c947690e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 21:51:47.793422606 +0000 UTC m=+0.165422055,LastTimestamp:2026-01-05 21:51:47.793422606 +0000 UTC m=+0.165422055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.795443 5034 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.795479 5034 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 05 21:51:47 crc systemd[1]: Started Kubernetes Kubelet.
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.795152 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:59:24.786361434 +0000 UTC
Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.795844 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="200ms"
Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.795861 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.795978 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.796138 5034 factory.go:55] Registering systemd factory
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.796160 5034 factory.go:221] Registration of the systemd container factory successfully
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.796419 5034 factory.go:153] Registering CRI-O factory
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.796439 5034 factory.go:221] Registration of the crio container factory successfully
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.796530 5034 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.796549 5034 factory.go:103] Registering Raw factory
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.796563 5034 manager.go:1196] Started watching for new ooms in manager
Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.797039
5034 manager.go:319] Starting recovery of all containers Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.797678 5034 server.go:460] "Adding debug handlers to kubelet server" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807508 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807557 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807574 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807589 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807602 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807615 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807627 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807667 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807683 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807695 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807706 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807718 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807729 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807743 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807756 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807768 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807780 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807793 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807806 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807818 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807830 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807846 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807857 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807869 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807901 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807912 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807927 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807940 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807953 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807966 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807979 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.807990 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808000 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808009 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808037 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808047 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808056 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808067 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808111 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808127 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808198 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808216 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808228 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808239 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808272 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808287 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808300 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808312 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808323 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808335 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808348 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808378 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808395 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808408 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808429 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808443 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808458 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808470 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808483 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808495 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808507 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808519 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808531 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808544 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808555 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808569 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808585 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808598 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808612 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808625 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808638 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808652 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808665 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808676 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808687 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808701 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808715 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808729 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808743 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808764 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808778 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808789 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808802 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808814 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808826 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808837 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808850 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808861 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808873 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808884 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808896 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808908 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808921 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808932 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808944 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808956 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808969 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808981 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.808995 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809007 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809019 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809636 5034 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809659 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809671 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809681 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809695 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809710 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809955 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809966 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809977 5034 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809989 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.809999 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810010 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810020 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810031 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810041 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810051 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810061 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810070 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810108 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810122 5034 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810135 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810148 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810160 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810171 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810190 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810203 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810215 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810228 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810242 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810255 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810267 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810279 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810291 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810307 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810320 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810335 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810347 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810361 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810374 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810387 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810400 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810412 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810425 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810439 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810452 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810466 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810479 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810492 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810505 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810518 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810530 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810544 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810558 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810570 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810583 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810597 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810611 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810625 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810637 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810650 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810663 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810678 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810692 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810702 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810711 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810721 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810731 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810741 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810752 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810762 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810772 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810782 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810791 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810801 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810811 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810821 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810831 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810842 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810854 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810864 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810874 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810883 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810942 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810951 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810962 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810971 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810981 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.810990 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811000 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811009 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811018 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811027 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811036 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811046 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811055 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811064 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811116 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811131 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811141 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811150 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811158 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811167 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811176 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811185 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811196 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811206 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811215 5034 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811224 5034 reconstruct.go:97] "Volume reconstruction finished" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.811230 5034 reconciler.go:26] "Reconciler: start to sync state" Jan 05 21:51:47 crc kubenswrapper[5034]: 
I0105 21:51:47.819839 5034 manager.go:324] Recovery completed Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.829504 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.831193 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.831234 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.831243 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.831934 5034 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.831948 5034 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.831964 5034 state_mem.go:36] "Initialized new in-memory state store" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.835657 5034 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.837118 5034 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.837150 5034 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.837171 5034 kubelet.go:2335] "Starting kubelet main sync loop" Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.837205 5034 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 05 21:51:47 crc kubenswrapper[5034]: W0105 21:51:47.837733 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.837789 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.887299 5034 policy_none.go:49] "None policy: Start" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.888265 5034 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.888295 5034 state_mem.go:35] "Initializing new in-memory state store" Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.896887 5034 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.938106 5034 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.947316 5034 manager.go:334] "Starting Device Plugin manager" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.947843 5034 manager.go:513] "Failed to read 
data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.947997 5034 server.go:79] "Starting device plugin registration server" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.948757 5034 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.948885 5034 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.949239 5034 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.949331 5034 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 05 21:51:47 crc kubenswrapper[5034]: I0105 21:51:47.949340 5034 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.959147 5034 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 05 21:51:47 crc kubenswrapper[5034]: E0105 21:51:47.997002 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="400ms" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.049845 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.050998 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.051039 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.051049 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.051086 5034 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:51:48 crc kubenswrapper[5034]: E0105 21:51:48.051778 5034 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.138791 5034 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.138904 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.140144 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.140172 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.140180 5034 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.140270 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.140486 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.140534 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141101 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141124 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141133 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141212 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141303 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141324 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141833 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141896 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141923 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141942 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.141949 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142057 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142107 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142129 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142138 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142122 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142154 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142957 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142984 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.142998 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143159 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143523 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143559 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143579 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143588 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143565 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143938 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143956 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.143965 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.144091 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.144111 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.144325 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.144343 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.144351 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.145270 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.145285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.145292 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216301 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216336 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216363 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216383 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216406 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216462 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216532 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216570 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216591 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216607 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216623 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216642 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216658 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216691 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.216724 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 
21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.251914 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.253234 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.253261 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.253269 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.253290 5034 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:51:48 crc kubenswrapper[5034]: E0105 21:51:48.253673 5034 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318013 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318074 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318135 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318158 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318181 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318202 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318225 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318245 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318265 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318287 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318311 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318312 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318312 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318349 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318266 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318372 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318382 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:51:48 crc 
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318414 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318405 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318450 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318451 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318397 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318490 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318506 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318524 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318407 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318556 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318288 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318567 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.318687 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: E0105 21:51:48.397966 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="800ms"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.472991 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.479794 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: W0105 21:51:48.489959 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-45739b41cf75062813b1a217779c1364cbeac54d72698cec72ea83497b154952 WatchSource:0}: Error finding container 45739b41cf75062813b1a217779c1364cbeac54d72698cec72ea83497b154952: Status 404 returned error can't find the container with id 45739b41cf75062813b1a217779c1364cbeac54d72698cec72ea83497b154952
Jan 05 21:51:48 crc kubenswrapper[5034]: W0105 21:51:48.493381 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-291de3bfe5893914f609d924494a98dc9890bc07c67ea5138e7d95ab05c126b3 WatchSource:0}: Error finding container 291de3bfe5893914f609d924494a98dc9890bc07c67ea5138e7d95ab05c126b3: Status 404 returned error can't find the container with id 291de3bfe5893914f609d924494a98dc9890bc07c67ea5138e7d95ab05c126b3
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.493935 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: W0105 21:51:48.510984 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-20056bcbe98b819e9de3650c502aa9721c5a607226179b6cb96efa92eb81d112 WatchSource:0}: Error finding container 20056bcbe98b819e9de3650c502aa9721c5a607226179b6cb96efa92eb81d112: Status 404 returned error can't find the container with id 20056bcbe98b819e9de3650c502aa9721c5a607226179b6cb96efa92eb81d112
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.513378 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.520638 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: W0105 21:51:48.526543 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-21db5dafb14e66ac50ad775d3af090f0b1c760df528a3410e2937108a3e9faa6 WatchSource:0}: Error finding container 21db5dafb14e66ac50ad775d3af090f0b1c760df528a3410e2937108a3e9faa6: Status 404 returned error can't find the container with id 21db5dafb14e66ac50ad775d3af090f0b1c760df528a3410e2937108a3e9faa6
Jan 05 21:51:48 crc kubenswrapper[5034]: W0105 21:51:48.536824 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-2e226d5a118c211b383352cb26b668aff7f49191bcbcae9ee12db2f15173e1f8 WatchSource:0}: Error finding container 2e226d5a118c211b383352cb26b668aff7f49191bcbcae9ee12db2f15173e1f8: Status 404 returned error can't find the container with id 2e226d5a118c211b383352cb26b668aff7f49191bcbcae9ee12db2f15173e1f8
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.654069 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.656047 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.656095 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.656106 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.656134 5034 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: E0105 21:51:48.656566 5034 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.794683 5034 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.795660 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:36:25.465416131 +0000 UTC
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.841433 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36"}
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.841544 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"21db5dafb14e66ac50ad775d3af090f0b1c760df528a3410e2937108a3e9faa6"}
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.842845 5034 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6" exitCode=0
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.842916 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6"}
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.843018 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"20056bcbe98b819e9de3650c502aa9721c5a607226179b6cb96efa92eb81d112"}
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.843238 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.844237 5034 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b0cbbde2dd1bf0e95dcfe2616b983655502935d27b61856957c5addb14a721d3" exitCode=0
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.844269 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.844296 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.844306 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.844306 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b0cbbde2dd1bf0e95dcfe2616b983655502935d27b61856957c5addb14a721d3"}
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.844336 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"291de3bfe5893914f609d924494a98dc9890bc07c67ea5138e7d95ab05c126b3"}
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.846160 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d"}
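[Editor's note] The generic.go:334 "container finished" lines followed by ContainerDied/ContainerStarted events come from the pod lifecycle event generator (PLEG), which periodically relists containers and turns state differences into events. A rough sketch of that diff, with simplified types rather than kubelet's actual PLEG:

```go
package main

import "fmt"

// Rough sketch of a PLEG-style relist: compare each container's state
// before and after, and emit the kinds of events seen in the log above.
// Types and IDs are illustrative stand-ins.
type state int

const (
	absent state = iota
	running
	exited
)

func relist(before, after map[string]state, ids []string) {
	for _, id := range ids {
		old, cur := before[id], after[id]
		switch {
		case old != running && cur == running:
			fmt.Println("event ContainerStarted", id)
		case old == running && cur == exited:
			fmt.Println("event ContainerDied", id)
		}
	}
}

func main() {
	before := map[string]state{"921fa597": running}
	after := map[string]state{"921fa597": exited, "20056bcb": running}
	relist(before, after, []string{"921fa597", "20056bcb"})
	// -> event ContainerDied 921fa597
	// -> event ContainerStarted 20056bcb
}
```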
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.846184 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"45739b41cf75062813b1a217779c1364cbeac54d72698cec72ea83497b154952"}
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.846268 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.847128 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.847182 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.847192 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.848386 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21"}
Jan 05 21:51:48 crc kubenswrapper[5034]: I0105 21:51:48.848416 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2e226d5a118c211b383352cb26b668aff7f49191bcbcae9ee12db2f15173e1f8"}
Jan 05 21:51:48 crc kubenswrapper[5034]: W0105 21:51:48.854160 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:48 crc kubenswrapper[5034]: E0105 21:51:48.854245 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:48 crc kubenswrapper[5034]: W0105 21:51:48.855406 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:48 crc kubenswrapper[5034]: E0105 21:51:48.855450 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:49 crc kubenswrapper[5034]: W0105 21:51:49.014858 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:49 crc kubenswrapper[5034]: E0105 21:51:49.014953 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:49 crc kubenswrapper[5034]: E0105 21:51:49.199349 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="1.6s"
Jan 05 21:51:49 crc kubenswrapper[5034]: W0105 21:51:49.387385 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:49 crc kubenswrapper[5034]: E0105 21:51:49.387470 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.456672 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.457807 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.457845 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.457854 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.457879 5034 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 05 21:51:49 crc kubenswrapper[5034]: E0105 21:51:49.458306 5034 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.794941 5034 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.797059 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:10:31.18598892 +0000 UTC
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.826279 5034 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
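[Editor's note] The lease controller's retry interval doubles on consecutive failures: interval="800ms" at 21:51:48, "1.6s" here, and "3.2s" later at 21:52:00. A minimal sketch of that doubling policy; the always-failing ensureLease and the 7s cap are assumptions for illustration, not the controller's actual constants:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Sketch of the exponential retry behind interval="800ms" -> "1.6s" -> "3.2s".
func ensureLease() error { return errors.New("connect: connection refused") }

func main() {
	interval := 800 * time.Millisecond
	for attempt := 0; attempt < 4; attempt++ {
		err := ensureLease()
		if err == nil {
			return // lease exists; steady-state renewal takes over
		}
		fmt.Printf("Failed to ensure lease exists, will retry in %v: %v\n", interval, err)
		interval *= 2 // double on each failure...
		if limit := 7 * time.Second; interval > limit {
			interval = limit // ...up to a cap (value assumed here)
		}
		// time.Sleep(interval) would go here; elided to keep the demo instant.
	}
}
```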
Jan 05 21:51:49 crc kubenswrapper[5034]: E0105 21:51:49.827443 5034 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.853870 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.853904 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.853914 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.853977 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.854709 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.854739 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.854747 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.856314 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36" exitCode=0
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.856373 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.856449 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.857386 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.857412 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.857424 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.858517 5034 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc" exitCode=0
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.858561 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.858641 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.859072 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.859594 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.859614 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.859621 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.860048 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.860065 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.860072 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.862844 5034 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d" exitCode=0
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.862917 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.863313 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.863343 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.863356 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.863367 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c"}
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.863433 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.864106 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.864132 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.864142 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.864583 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.864608 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:49 crc kubenswrapper[5034]: I0105 21:51:49.864618 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.797916 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:01:53.082157201 +0000 UTC
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.866561 5034 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c" exitCode=0
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.866642 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c"}
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.866773 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.867773 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.867797 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.867825 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.868130 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6d2d76a740c85091de13be46810a17d1ff7717a513b2df1c877d75a5c5cc16fe"}
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.868201 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.868819 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.868839 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.868846 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.871541 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872142 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
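[Editor's note] The etcd-crc pattern above, a container finishing with exitCode=0 and a ContainerDied event arriving before the next ContainerStarted, is init-container sequencing: each init container must exit 0 before the next may run, and only then do the regular containers start. A minimal sketch under that reading; the shell commands are placeholders, not the pod's real init containers:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Sketch of init-container sequencing: run each step in order and stop
// on the first non-zero exit, as kubelet does for init containers.
func main() {
	steps := []string{"echo step-1", "echo step-2", "echo step-3"}
	for _, s := range steps {
		if err := exec.Command("sh", "-c", s).Run(); err != nil {
			fmt.Printf("init step %q failed, pod stays in Init: %v\n", s, err)
			return // a failed init container blocks everything after it
		}
		fmt.Printf("init step %q exited 0; next may start\n", s)
	}
	fmt.Println("all init steps done; regular containers can start")
}
```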
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872516 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07"}
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872556 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb"}
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872576 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0"}
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872588 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411"}
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872599 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5"}
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872889 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872918 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.872929 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.873591 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.873615 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.873625 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:50 crc kubenswrapper[5034]: I0105 21:51:50.902405 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.059321 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.060355 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.060380 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.060390 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.060408 5034 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.798925 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:52:33.304077781 +0000 UTC
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.878168 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9"}
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.878214 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.878224 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e"}
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.878245 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6"}
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.878264 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996"}
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.878254 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.879853 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.879931 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:51 crc kubenswrapper[5034]: I0105 21:51:51.879947 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.799559 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:46:45.834424483 +0000 UTC
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.886144 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40"}
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.886177 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.886214 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.886251 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.889933 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.889971 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.889986 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.890010 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.890030 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:52 crc kubenswrapper[5034]: I0105 21:51:52.890039 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.038694 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.706712 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.799813 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:10:48.619537546 +0000 UTC
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.888476 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.888483 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.888660 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.889363 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.889400 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.889409 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.889524 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.889548 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:53 crc kubenswrapper[5034]: I0105 21:51:53.889559 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.019762 5034 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.630480 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.630633 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.631611 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.631642 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.631651 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.800656 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:59:52.293730878 +0000 UTC
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.891560 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.892803 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.892864 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:54 crc kubenswrapper[5034]: I0105 21:51:54.892876 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:55 crc kubenswrapper[5034]: I0105 21:51:55.801053 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:53:16.537563986 +0000 UTC
Jan 05 21:51:55 crc kubenswrapper[5034]: I0105 21:51:55.801118 5034 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 52h1m20.736449368s for next certificate rotation
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.531016 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.531466 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.533362 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.533535 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.533563 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.539749 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.605512 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.605745 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.607218 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.607393 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
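[Editor's note] Each kubelet-serving line above reports the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline; client-go's certificate manager re-derives a jittered deadline on every evaluation, historically at notBefore plus a (0.7 + 0.2·rand) fraction of the certificate lifetime, then waits ("Waiting 52h1m20.736449368s for next certificate rotation"). A sketch of that computation under the stated jitter assumption; the one-year lifetime is also assumed:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Sketch of a jittered rotation deadline: the same certificate yields a
// different deadline on every computation. The 0.7 + 0.2*rand factor
// mirrors client-go's historical behavior, treated here as an assumption.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year lifetime
	for i := 0; i < 3; i++ {
		d := rotationDeadline(notBefore, notAfter)
		fmt.Printf("rotation deadline is %s\n", d.UTC())
	}
}
```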
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.607473 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.896561 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.898211 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.898261 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:56 crc kubenswrapper[5034]: I0105 21:51:56.898273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:57 crc kubenswrapper[5034]: E0105 21:51:57.959383 5034 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.059865 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.060230 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.061918 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.061951 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.061964 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.176578 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.176826 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.176956 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.178694 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.178745 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.178756 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.183282 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.901052 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.902202 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.902314 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:58 crc kubenswrapper[5034]: I0105 21:51:58.902428 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:51:59 crc kubenswrapper[5034]: I0105 21:51:59.331833 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 21:51:59 crc kubenswrapper[5034]: I0105 21:51:59.903361 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:51:59 crc kubenswrapper[5034]: I0105 21:51:59.904388 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:51:59 crc kubenswrapper[5034]: I0105 21:51:59.904438 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:51:59 crc kubenswrapper[5034]: I0105 21:51:59.904448 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:00 crc kubenswrapper[5034]: I0105 21:52:00.446683 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 05 21:52:00 crc kubenswrapper[5034]: I0105 21:52:00.446856 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:52:00 crc kubenswrapper[5034]: I0105 21:52:00.447918 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:00 crc kubenswrapper[5034]: I0105 21:52:00.447965 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:00 crc kubenswrapper[5034]: I0105 21:52:00.447981 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:00 crc kubenswrapper[5034]: I0105 21:52:00.795123 5034 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 05 21:52:00 crc kubenswrapper[5034]: E0105 21:52:00.800819 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Jan 05 21:52:01 crc kubenswrapper[5034]: E0105 21:52:01.060946 5034 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 05 21:52:01 crc kubenswrapper[5034]: W0105 21:52:01.323022 5034 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 05 21:52:01 crc kubenswrapper[5034]: I0105 21:52:01.323115 5034 trace.go:236] Trace[613575906]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 21:51:51.322) (total time: 10000ms):
Jan 05 21:52:01 crc kubenswrapper[5034]: Trace[613575906]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (21:52:01.323)
Jan 05 21:52:01 crc kubenswrapper[5034]: Trace[613575906]: [10.000982497s] [10.000982497s] END
Jan 05 21:52:01 crc kubenswrapper[5034]: E0105 21:52:01.323136 5034 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 05 21:52:01 crc kubenswrapper[5034]: I0105 21:52:01.428379 5034 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 05 21:52:01 crc kubenswrapper[5034]: I0105 21:52:01.428755 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 05 21:52:01 crc kubenswrapper[5034]: I0105 21:52:01.432588 5034 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 05 21:52:01 crc kubenswrapper[5034]: I0105 21:52:01.432648 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 05 21:52:02 crc kubenswrapper[5034]: I0105 21:52:02.332191 5034 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:52:02 crc kubenswrapper[5034]: I0105 21:52:02.332880 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.711112 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.711839 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.713362 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.713422 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.713440 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.716362 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.910507 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.911309 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.911338 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:03 crc kubenswrapper[5034]: I0105 21:52:03.911348 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:04 crc kubenswrapper[5034]: I0105 21:52:04.261046 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 21:52:04 crc kubenswrapper[5034]: I0105 21:52:04.261940 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:04 crc kubenswrapper[5034]: I0105 21:52:04.261974 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:04 crc kubenswrapper[5034]: I0105 21:52:04.261987 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:04 crc kubenswrapper[5034]: I0105 21:52:04.262008 5034 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 05 21:52:04 crc kubenswrapper[5034]: E0105 21:52:04.264717 5034 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.419030 5034 trace.go:236] Trace[479525244]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 21:51:51.722) (total time: 14696ms):
Jan 05 21:52:06 crc kubenswrapper[5034]: Trace[479525244]: ---"Objects listed" error: 14696ms (21:52:06.418)
Jan 05 21:52:06 crc kubenswrapper[5034]: Trace[479525244]: [14.696882374s] [14.696882374s] END
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.419123 5034 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.419541 5034 trace.go:236] Trace[1701429798]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 21:51:51.754) (total time: 14664ms):
Jan 05 21:52:06 crc kubenswrapper[5034]: Trace[1701429798]: ---"Objects listed" error: 14664ms (21:52:06.419)
Jan 05 21:52:06 crc kubenswrapper[5034]: Trace[1701429798]: [14.66452557s] [14.66452557s] END
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.419554 5034 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.419581 5034 trace.go:236] Trace[1286844471]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 21:51:51.487) (total time: 14931ms):
Jan 05 21:52:06 crc kubenswrapper[5034]: Trace[1286844471]: ---"Objects listed" error: 14931ms (21:52:06.419)
Jan 05 21:52:06 crc kubenswrapper[5034]: Trace[1286844471]: [14.931642083s] [14.931642083s] END
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.419602 5034 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.420338 5034 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.423813 5034 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.443054 5034 csr.go:261] certificate signing request csr-458tq is approved, waiting to be issued
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.452315 5034 csr.go:257] certificate signing request csr-458tq is issued
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.590291 5034 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.609711 5034 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52286->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.609715 5034 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52280->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.609771 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52286->192.168.126.11:17697: read: connection reset by peer"
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.609792 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52280->192.168.126.11:17697: read: connection reset by peer"
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.610063 5034 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
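[Editor's note] The csr.go lines close the loop on the client-certificate rotation that failed at 21:51:49: the kubelet's CSR (csr-458tq) is now approved and then issued. A minimal stdlib sketch of the request a kubelet builds for kube-apiserver-client-kubelet rotation; the subject follows the node-client convention (CN "system:node:<name>", O "system:nodes"), but this is illustrative code, not kubelet's own:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
)

// Sketch of a kubelet-style client CSR (cf. csr-458tq above).
func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:crc",          // node identity
			Organization: []string{"system:nodes"},   // node group
		},
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, tmpl, key)
	if err != nil {
		panic(err)
	}
	// This PEM block is what gets posted to
	// /apis/certificates.k8s.io/v1/certificatesigningrequests, after which
	// the request must be approved and then issued, as logged above.
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der}))
}
```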
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.610463 5034 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.610525 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.801828 5034 apiserver.go:52] "Watching apiserver" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.841342 5034 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.841652 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.842001 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.842104 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.842141 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.842188 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.842326 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.842426 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.842473 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.842484 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.843089 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.846130 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.846448 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.848166 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.848363 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.848462 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.848658 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.848826 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.849033 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.849436 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.896445 5034 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.897205 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.897205 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.907000 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.917613 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.918680 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.919490 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07" exitCode=255
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.919571 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07"}
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.922733 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.922786 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.922814 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.922834 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.922854 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
(UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.922896 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.922920 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.922996 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923022 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923045 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923073 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923144 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923168 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923192 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923213 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") 
pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923236 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923262 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923283 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923303 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923324 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923333 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923346 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923368 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923365 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923365 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923391 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923391 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923401 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923432 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923466 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923496 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923522 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923546 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923576 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923603 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923630 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923654 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923678 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923701 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923727 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923750 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923773 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923799 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923823 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923858 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923887 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923914 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923939 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923963 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923986 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924009 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924034 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924055 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924097 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924120 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924147 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924171 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924194 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924216 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924239 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924261 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924285 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924314 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924336 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924362 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924403 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924427 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924452 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924475 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" 
(UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924499 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924521 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924548 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924572 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924594 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924620 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924643 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924668 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924691 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924714 5034 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924741 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924881 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924908 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924932 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924955 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924978 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925001 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925025 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925057 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925099 5034 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925126 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925150 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925175 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925199 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925225 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925250 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925273 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925384 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925413 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925438 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925462 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925485 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925512 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925536 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925562 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925589 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925614 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925637 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925660 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923565 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923606 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923644 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923686 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923736 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923746 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923826 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923839 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923884 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.923987 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924069 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924231 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924296 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.924314 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925192 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925498 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925674 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926009 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.925684 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926070 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926121 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926144 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926168 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926194 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926217 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926238 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926261 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926283 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926303 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926325 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926346 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926366 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926387 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926408 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") 
pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926429 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926451 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926477 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926500 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926522 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926545 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926566 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926587 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926612 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926637 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926641 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926658 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926663 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926682 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926704 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926729 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926876 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926918 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926938 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926957 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926973 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926990 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927138 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927186 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927217 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927245 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927271 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927295 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927318 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927337 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927358 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927380 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927401 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927425 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927447 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927471 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927496 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927515 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927540 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927565 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927588 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927610 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927634 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927661 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927688 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927717 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927741 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927764 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927787 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927814 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927840 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927865 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927893 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927916 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927938 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927964 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927986 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928002 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928020 5034 
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928036 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928052 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928101 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928128 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928152 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928176 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928198 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928219 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928242 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928267 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928289 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928312 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928335 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928358 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928381 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928405 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928427 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928450 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928472 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
(UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928528 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928557 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928584 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928609 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928642 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928663 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928692 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928718 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928744 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928766 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928792 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928815 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928845 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928874 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928941 5034 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928964 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928979 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929003 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:06 crc 
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929030 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929047 5034 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929061 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929097 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929111 5034 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929125 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929137 5034 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929149 5034 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929160 5034 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929177 5034 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929189 5034 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929202 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929214 5034 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929231 5034 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929245 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929258 5034 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929272 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929284 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929298 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929314 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929327 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935484 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926693 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926827 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.926833 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.941204 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927007 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927613 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927654 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927668 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927750 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.927795 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928025 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928337 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928387 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928469 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928816 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.928907 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929281 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929617 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929799 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929852 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.929909 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.930221 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.930232 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.930292 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.930550 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.930573 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.930728 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.930827 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.930870 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.931017 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.931065 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.931361 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.931386 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.931477 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.931574 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.931611 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:52:07.431507707 +0000 UTC m=+19.803507226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.931967 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932291 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932315 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932361 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932378 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932545 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932682 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932756 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932772 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932972 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.932991 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.933029 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.933773 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.933956 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.934063 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.931959 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.934500 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.934546 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.934779 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935003 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935016 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935146 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935326 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935585 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935678 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935701 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.935913 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.936021 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.936021 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.936545 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.936953 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.936971 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.937191 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.937390 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.937552 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.938002 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.938173 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.938611 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.939555 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.939723 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.939730 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.939971 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.940205 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.940402 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.940636 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.940831 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.940942 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.940972 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.941514 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.941671 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.942106 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.941920 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.942149 5034 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.942097 5034 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.943312 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.943409 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:07.443384086 +0000 UTC m=+19.815383595 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.943764 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.944169 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.944663 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.944720 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.944867 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945086 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945111 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945127 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945159 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.944354 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945384 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945589 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945605 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945694 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945895 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.945990 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.946284 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.946312 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.946359 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.946650 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.946853 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.946862 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.946939 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.946994 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.947580 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.947971 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.948005 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.948070 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.948105 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.948448 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.948665 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.948689 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.948689 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.949023 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.949151 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.949350 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.949450 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.949607 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.949611 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.949695 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.949850 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.950026 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.950306 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.950351 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.950544 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.950768 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.950824 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.951345 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.951476 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.951483 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.951734 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.952250 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.952482 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.952673 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.952801 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.952829 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.952969 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.953008 5034 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.953237 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:07.453215709 +0000 UTC m=+19.825215148 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.953319 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.953041 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.953716 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.953837 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.954113 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.954340 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.954715 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.954931 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.955173 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.955684 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.955713 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.955728 5034 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.955790 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:07.45577059 +0000 UTC m=+19.827770119 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.955871 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.955884 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.955893 5034 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:06 crc kubenswrapper[5034]: E0105 21:52:06.955923 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:07.455914024 +0000 UTC m=+19.827913543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.956761 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.956791 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.962624 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.963125 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.963197 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.963356 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.963558 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.963556 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.963680 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.963929 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.963974 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.964040 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.964137 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.965208 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.969039 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.969090 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.969931 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.971828 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.972813 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.972998 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.976951 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.977661 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.978352 5034 scope.go:117] "RemoveContainer" containerID="fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.984383 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.989267 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:52:06 crc kubenswrapper[5034]: I0105 21:52:06.995922 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.003154 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.006051 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.018716 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030515 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030564 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030604 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030614 5034 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030623 5034 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030632 5034 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030640 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030648 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030672 5034 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc 
kubenswrapper[5034]: I0105 21:52:07.030680 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030688 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030684 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030684 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030696 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030750 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030765 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030778 5034 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030789 5034 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030801 5034 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030811 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030820 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030829 5034 
reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030837 5034 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030846 5034 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030854 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030862 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030870 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030878 5034 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030885 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030893 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030901 5034 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030909 5034 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030917 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030924 5034 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030941 5034 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030954 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030962 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030970 5034 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030979 5034 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030987 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.030995 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031003 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031013 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031025 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031036 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031046 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031056 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031066 5034 reconciler_common.go:293] "Volume detached for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031092 5034 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031104 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031116 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031125 5034 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031133 5034 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031141 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031149 5034 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031157 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031165 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031173 5034 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031182 5034 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031189 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031197 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031205 5034 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031214 5034 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031222 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031230 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031239 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031247 5034 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031256 5034 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031264 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031272 5034 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031280 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031288 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031297 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031305 5034 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031313 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031321 5034 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031330 5034 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031338 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031345 5034 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031354 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031361 5034 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031369 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031377 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031384 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031392 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031401 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031409 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031417 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031424 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031433 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031440 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031448 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031420 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031456 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031576 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031588 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031598 5034 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031606 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031614 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031623 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031634 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031644 5034 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031654 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031664 5034 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031674 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031683 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031690 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031698 5034 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031705 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031713 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031720 5034 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031727 5034 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031736 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031744 5034 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031752 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031761 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031769 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031777 5034 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031785 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031792 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031801 5034 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031810 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031819 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031827 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 
21:52:07.031836 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031844 5034 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031851 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031859 5034 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031867 5034 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031875 5034 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031885 5034 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031896 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031906 5034 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031916 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031926 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031935 5034 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031943 5034 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031951 5034 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031960 5034 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031967 5034 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031975 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031983 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031991 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.031999 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032008 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032016 5034 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032023 5034 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032031 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032039 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032048 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc 
kubenswrapper[5034]: I0105 21:52:07.032057 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032065 5034 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032072 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032109 5034 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032118 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032126 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032135 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032142 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032150 5034 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032158 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032166 5034 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032173 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032180 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 
crc kubenswrapper[5034]: I0105 21:52:07.032189 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032196 5034 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032203 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032211 5034 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032218 5034 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032225 5034 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032232 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032240 5034 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032248 5034 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.032255 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.041248 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.051248 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05
T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.063860 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.073580 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.083384 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.159069 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.168931 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.169401 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d2ab9936754bde45379733bde84739857c9a12df8d7da14002237fb9048a7ccf WatchSource:0}: Error finding container d2ab9936754bde45379733bde84739857c9a12df8d7da14002237fb9048a7ccf: Status 404 returned error can't find the container with id d2ab9936754bde45379733bde84739857c9a12df8d7da14002237fb9048a7ccf Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.178670 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-c07ac2cf1608ee1847e585e104d246533fcd958abad956159239deb40e0373d3 WatchSource:0}: Error finding container c07ac2cf1608ee1847e585e104d246533fcd958abad956159239deb40e0373d3: Status 404 returned error can't find the container with id c07ac2cf1608ee1847e585e104d246533fcd958abad956159239deb40e0373d3 Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.180038 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.435508 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.435691 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:52:08.435664243 +0000 UTC m=+20.807663682 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.454284 5034 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-05 21:47:06 +0000 UTC, rotation deadline is 2026-10-19 06:48:35.281876373 +0000 UTC Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.454341 5034 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6872h56m27.827537751s for next certificate rotation Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.536883 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.536928 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.536955 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.536984 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537094 5034 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537105 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537124 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537137 5034 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537168 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:08.537150699 +0000 UTC m=+20.909150138 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537187 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:08.53717907 +0000 UTC m=+20.909178509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537186 5034 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537229 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537276 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537291 5034 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537276 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:08.537257712 +0000 UTC m=+20.909257151 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:07 crc kubenswrapper[5034]: E0105 21:52:07.537353 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:08.537335344 +0000 UTC m=+20.909334783 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.574546 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-frlwc"] Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.574924 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hzbjx"] Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.575122 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tsch6"] Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.575132 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.575253 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hzbjx" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.575530 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.577026 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.577027 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.577366 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.577392 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.577431 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.577372 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.577538 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.577541 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.579287 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.579327 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.579511 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.579578 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.579711 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.587965 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.597910 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.606940 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.617658 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05
T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.631780 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.637846 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-cni-bin\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.637878 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-daemon-config\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.637894 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-multus-certs\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.637912 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-os-release\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.637936 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-cnibin\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.637958 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/691cc76e-ed89-4547-9bb1-58b03c8f7932-cni-binary-copy\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.637971 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-k8s-cni-cncf-io\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.637987 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-kubelet\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638007 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-etc-kubernetes\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638020 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-cni-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638036 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdd89329-d259-499c-bfe9-747d547d10f6-proxy-tls\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638050 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-system-cni-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638071 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-cni-multus\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638103 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xwg2\" (UniqueName: \"kubernetes.io/projected/bdd89329-d259-499c-bfe9-747d547d10f6-kube-api-access-4xwg2\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 
21:52:07.638134 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jhx\" (UniqueName: \"kubernetes.io/projected/33d2b819-50a6-427b-8503-a87d0fafc058-kube-api-access-j7jhx\") pod \"node-resolver-hzbjx\" (UID: \"33d2b819-50a6-427b-8503-a87d0fafc058\") " pod="openshift-dns/node-resolver-hzbjx" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638159 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdd89329-d259-499c-bfe9-747d547d10f6-rootfs\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638173 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-netns\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638186 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-hostroot\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638201 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-conf-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638223 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-socket-dir-parent\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638239 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/33d2b819-50a6-427b-8503-a87d0fafc058-hosts-file\") pod \"node-resolver-hzbjx\" (UID: \"33d2b819-50a6-427b-8503-a87d0fafc058\") " pod="openshift-dns/node-resolver-hzbjx" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638267 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sg5b\" (UniqueName: \"kubernetes.io/projected/691cc76e-ed89-4547-9bb1-58b03c8f7932-kube-api-access-6sg5b\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.638280 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdd89329-d259-499c-bfe9-747d547d10f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 
21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.641107 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.649579 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.660943 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.679349 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05
T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.688340 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.700801 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.710466 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.721348 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.731317 5034 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731514 5034 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731554 5034 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received 
Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731564 5034 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731605 5034 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731625 5034 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731638 5034 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731648 5034 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731658 5034 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731677 5034 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731737 5034 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731763 5034 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.731749 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c/status\": read tcp 38.102.83.156:59100->38.102.83.156:6443: use of closed network connection" Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731774 5034 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731805 5034 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731828 5034 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731831 5034 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731584 5034 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret 
ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731854 5034 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731865 5034 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731857 5034 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731789 5034 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731882 5034 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.731885 5034 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739134 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-system-cni-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739196 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-cni-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739216 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdd89329-d259-499c-bfe9-747d547d10f6-proxy-tls\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739249 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-cni-multus\") pod 
\"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739269 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xwg2\" (UniqueName: \"kubernetes.io/projected/bdd89329-d259-499c-bfe9-747d547d10f6-kube-api-access-4xwg2\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739289 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7jhx\" (UniqueName: \"kubernetes.io/projected/33d2b819-50a6-427b-8503-a87d0fafc058-kube-api-access-j7jhx\") pod \"node-resolver-hzbjx\" (UID: \"33d2b819-50a6-427b-8503-a87d0fafc058\") " pod="openshift-dns/node-resolver-hzbjx" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739313 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdd89329-d259-499c-bfe9-747d547d10f6-rootfs\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739337 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-netns\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739354 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-hostroot\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739372 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-conf-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739396 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-socket-dir-parent\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739414 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/33d2b819-50a6-427b-8503-a87d0fafc058-hosts-file\") pod \"node-resolver-hzbjx\" (UID: \"33d2b819-50a6-427b-8503-a87d0fafc058\") " pod="openshift-dns/node-resolver-hzbjx" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739442 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sg5b\" (UniqueName: \"kubernetes.io/projected/691cc76e-ed89-4547-9bb1-58b03c8f7932-kube-api-access-6sg5b\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc 
kubenswrapper[5034]: I0105 21:52:07.739460 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdd89329-d259-499c-bfe9-747d547d10f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739488 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-os-release\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739506 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-cni-bin\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739527 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-daemon-config\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739565 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-multus-certs\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739598 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-cnibin\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739617 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/691cc76e-ed89-4547-9bb1-58b03c8f7932-cni-binary-copy\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.739635 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-k8s-cni-cncf-io\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740215 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-conf-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740276 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-multus-certs\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740315 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-cni-bin\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740323 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-cnibin\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740437 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-cni-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740440 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-cni-multus\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740504 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-os-release\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740543 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdd89329-d259-499c-bfe9-747d547d10f6-rootfs\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740553 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-system-cni-dir\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740583 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-netns\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740596 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/33d2b819-50a6-427b-8503-a87d0fafc058-hosts-file\") pod \"node-resolver-hzbjx\" (UID: \"33d2b819-50a6-427b-8503-a87d0fafc058\") " pod="openshift-dns/node-resolver-hzbjx" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740620 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-socket-dir-parent\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740727 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-run-k8s-cni-cncf-io\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740858 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-hostroot\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740908 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-kubelet\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740936 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-etc-kubernetes\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740936 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-host-var-lib-kubelet\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.740999 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/691cc76e-ed89-4547-9bb1-58b03c8f7932-etc-kubernetes\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.741134 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/691cc76e-ed89-4547-9bb1-58b03c8f7932-multus-daemon-config\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.741167 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/691cc76e-ed89-4547-9bb1-58b03c8f7932-cni-binary-copy\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.741412 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdd89329-d259-499c-bfe9-747d547d10f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.744282 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdd89329-d259-499c-bfe9-747d547d10f6-proxy-tls\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.756857 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sg5b\" (UniqueName: \"kubernetes.io/projected/691cc76e-ed89-4547-9bb1-58b03c8f7932-kube-api-access-6sg5b\") pod \"multus-tsch6\" (UID: \"691cc76e-ed89-4547-9bb1-58b03c8f7932\") " pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.757794 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xwg2\" (UniqueName: \"kubernetes.io/projected/bdd89329-d259-499c-bfe9-747d547d10f6-kube-api-access-4xwg2\") pod \"machine-config-daemon-frlwc\" (UID: \"bdd89329-d259-499c-bfe9-747d547d10f6\") " pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.762524 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.762841 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7jhx\" (UniqueName: \"kubernetes.io/projected/33d2b819-50a6-427b-8503-a87d0fafc058-kube-api-access-j7jhx\") pod \"node-resolver-hzbjx\" (UID: \"33d2b819-50a6-427b-8503-a87d0fafc058\") " pod="openshift-dns/node-resolver-hzbjx" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.772433 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.788822 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.803884 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.841748 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.842399 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.843395 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.844038 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.844673 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.845327 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.846067 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.846757 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.847568 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.848201 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.848834 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.849693 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.850344 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.850905 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.852494 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.853300 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.853351 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.854467 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.855001 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.856770 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.857862 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.858648 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.859407 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.859933 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.860767 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.861359 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.862126 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.863020 5034 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.863645 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.864385 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.864934 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.865531 5034 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.865651 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.867119 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.867985 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.869142 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.869724 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.871841 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.873039 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.873727 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.874897 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.875787 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.876927 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.877830 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.879436 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.880834 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.881632 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.882653 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.883911 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.885562 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.886414 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.887070 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.887457 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.888390 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.889449 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.891813 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.892494 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.895422 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.895777 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hzbjx" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.903257 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tsch6" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.918411 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.919047 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33d2b819_50a6_427b_8503_a87d0fafc058.slice/crio-a7c5106515ad104c81ee512116cacd8a631781a87491436fb0ccbfdb1f2bcd3b WatchSource:0}: Error finding container a7c5106515ad104c81ee512116cacd8a631781a87491436fb0ccbfdb1f2bcd3b: Status 404 returned error can't find the container with id a7c5106515ad104c81ee512116cacd8a631781a87491436fb0ccbfdb1f2bcd3b Jan 05 21:52:07 crc kubenswrapper[5034]: W0105 21:52:07.927748 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod691cc76e_ed89_4547_9bb1_58b03c8f7932.slice/crio-fc046c1a83a7f98b84b733d8dafbb8b7041fc455712f4f392dbf9901d65b33ce WatchSource:0}: Error finding container fc046c1a83a7f98b84b733d8dafbb8b7041fc455712f4f392dbf9901d65b33ce: Status 404 returned error can't find the container with id fc046c1a83a7f98b84b733d8dafbb8b7041fc455712f4f392dbf9901d65b33ce Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.932735 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"8f13e7261f1880c9e5bf7f17a2f10bce29922cdda59c9b1835795a71733991eb"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.935114 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.935229 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.935468 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"55ff116a2ef8e87cc68f8a81201fbc3d9be9aeb68316fd22a5b24344f7954792"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.936411 
5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.937423 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c07ac2cf1608ee1847e585e104d246533fcd958abad956159239deb40e0373d3"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.946845 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.946913 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d2ab9936754bde45379733bde84739857c9a12df8d7da14002237fb9048a7ccf"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.948040 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-95tx4"] Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.948826 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.950448 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6fmfz"] Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.950574 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.950606 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.950838 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.951399 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.955634 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.955594 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05
T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.955777 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.955633 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.955857 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.955994 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.956019 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.956236 5034 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.958357 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.958519 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.962562 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hzbjx" event={"ID":"33d2b819-50a6-427b-8503-a87d0fafc058","Type":"ContainerStarted","Data":"a7c5106515ad104c81ee512116cacd8a631781a87491436fb0ccbfdb1f2bcd3b"} Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.975310 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:07 crc kubenswrapper[5034]: I0105 21:52:07.991750 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.006570 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.028013 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.043743 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-systemd-units\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044047 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " 
pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044069 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-var-lib-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044108 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-netns\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044123 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044141 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v7pr\" (UniqueName: \"kubernetes.io/projected/788e0f44-29c3-4c4a-afe9-33c26a965d74-kube-api-access-4v7pr\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044159 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-slash\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044182 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wbdb\" (UniqueName: \"kubernetes.io/projected/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-kube-api-access-4wbdb\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044198 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-env-overrides\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044223 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-os-release\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044240 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044257 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044273 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-kubelet\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044291 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-bin\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044326 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-config\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044355 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-system-cni-dir\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044375 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cnibin\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044394 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-etc-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044433 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-script-lib\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044482 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-systemd\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044516 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-ovn\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044534 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-log-socket\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044555 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-ovn-kubernetes\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044581 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovn-node-metrics-cert\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044610 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-node-log\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044635 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.044649 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-netd\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.050511 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.063809 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.083681 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.112797 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.133560 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145404 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-systemd\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145450 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-log-socket\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145495 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-ovn-kubernetes\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145522 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-ovn\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145542 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovn-node-metrics-cert\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145565 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-node-log\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145586 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-95tx4\" (UID: 
\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145602 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-netd\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145628 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-systemd-units\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145626 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-systemd\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145650 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145743 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145779 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-var-lib-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145841 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-netns\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145861 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-ovn-kubernetes\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145870 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: 
\"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145889 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-ovn\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145897 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v7pr\" (UniqueName: \"kubernetes.io/projected/788e0f44-29c3-4c4a-afe9-33c26a965d74-kube-api-access-4v7pr\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145976 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-slash\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146038 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wbdb\" (UniqueName: \"kubernetes.io/projected/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-kube-api-access-4wbdb\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146065 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-env-overrides\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146128 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-os-release\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146152 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146178 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146207 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-kubelet\") pod 
\"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146227 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-bin\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146248 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-config\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146274 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-script-lib\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146307 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-system-cni-dir\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146335 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cnibin\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146363 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-etc-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146453 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-etc-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146469 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-node-log\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146561 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.145788 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-log-socket\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146560 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-netns\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146595 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-systemd-units\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146602 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-netd\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146651 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-kubelet\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146649 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146692 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-slash\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146710 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-bin\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.146490 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-var-lib-openvswitch\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.147272 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-os-release\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.147306 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-system-cni-dir\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.147496 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cnibin\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.147846 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.148045 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-script-lib\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.148291 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.148514 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-config\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.148664 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-env-overrides\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.151940 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.152428 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovn-node-metrics-cert\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.165681 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wbdb\" (UniqueName: \"kubernetes.io/projected/9e658f8f-1b88-4076-92a9-dd1ebeca6bd8-kube-api-access-4wbdb\") pod \"multus-additional-cni-plugins-95tx4\" (UID: \"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\") " pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.167837 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.171908 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v7pr\" (UniqueName: \"kubernetes.io/projected/788e0f44-29c3-4c4a-afe9-33c26a965d74-kube-api-access-4v7pr\") pod \"ovnkube-node-6fmfz\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.181971 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.196646 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.210653 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.227098 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.242404 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.334173 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-95tx4" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.340014 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:08 crc kubenswrapper[5034]: W0105 21:52:08.354781 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod788e0f44_29c3_4c4a_afe9_33c26a965d74.slice/crio-a23c07583da8fc9983e5ecda000bce5feb53f433e482d48f0c480004ca259d7b WatchSource:0}: Error finding container a23c07583da8fc9983e5ecda000bce5feb53f433e482d48f0c480004ca259d7b: Status 404 returned error can't find the container with id a23c07583da8fc9983e5ecda000bce5feb53f433e482d48f0c480004ca259d7b Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.449759 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.449964 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:52:10.449948883 +0000 UTC m=+22.821948322 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.536381 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.550332 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.550373 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.550405 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.550463 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550478 5034 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550543 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:10.550523703 +0000 UTC m=+22.922523152 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550571 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550585 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550596 5034 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550636 5034 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550683 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550715 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550726 5034 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550653 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:10.550626666 +0000 UTC m=+22.922626105 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550782 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:10.55076602 +0000 UTC m=+22.922765459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.550802 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:10.550795011 +0000 UTC m=+22.922794450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.651756 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.712061 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.758144 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.786383 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.795753 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.838177 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.838225 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.838263 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.838297 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.838407 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:08 crc kubenswrapper[5034]: E0105 21:52:08.838499 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.859981 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.898256 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.899071 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.966680 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hzbjx" event={"ID":"33d2b819-50a6-427b-8503-a87d0fafc058","Type":"ContainerStarted","Data":"36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92"} Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.968940 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843"} Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.968977 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2"} Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.969660 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.969777 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" 
event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"a23c07583da8fc9983e5ecda000bce5feb53f433e482d48f0c480004ca259d7b"} Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.970768 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" event={"ID":"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8","Type":"ContainerStarted","Data":"bda6e5d90b8f63e1ba3336e701fee6c9b5a5f5bbb7fd6e114d3264773905bc66"} Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.971905 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tsch6" event={"ID":"691cc76e-ed89-4547-9bb1-58b03c8f7932","Type":"ContainerStarted","Data":"23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740"} Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.971962 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tsch6" event={"ID":"691cc76e-ed89-4547-9bb1-58b03c8f7932","Type":"ContainerStarted","Data":"fc046c1a83a7f98b84b733d8dafbb8b7041fc455712f4f392dbf9901d65b33ce"} Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.979164 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.982148 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 05 21:52:08 crc kubenswrapper[5034]: I0105 21:52:08.989720 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.002184 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.012660 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.023251 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.032290 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.043647 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.053776 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.064918 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.074508 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.087132 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.094301 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.107209 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.121935 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.125732 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.136457 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.149178 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.154963 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.162829 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.177417 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.197117 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.210886 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.217267 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.222479 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.232887 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.235415 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.238712 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.238867 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.248502 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.253414 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.271650 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.279228 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.285262 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.303318 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.308660 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.335935 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.339047 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.344857 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.367297 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.405722 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.446339 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.485731 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.527192 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.569853 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.604732 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.645216 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.686838 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.722740 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.765385 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.808652 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.849960 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.888879 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.926091 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.968001 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:09Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.976304 5034 generic.go:334] "Generic (PLEG): container finished" 
podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a" exitCode=0 Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.976362 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.977810 5034 generic.go:334] "Generic (PLEG): container finished" podID="9e658f8f-1b88-4076-92a9-dd1ebeca6bd8" containerID="6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184" exitCode=0 Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.977869 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" event={"ID":"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8","Type":"ContainerDied","Data":"6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184"} Jan 05 21:52:09 crc kubenswrapper[5034]: I0105 21:52:09.979152 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41"} Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.001985 5034 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.026088 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.067405 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.113791 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.177885 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.187190 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.233771 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.267907 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.309125 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.343853 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.387250 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.428800 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.469482 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.469716 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:52:14.469695616 +0000 UTC m=+26.841695055 (durationBeforeRetry 4s). 
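
Note: every "Failed to update status for pod" entry above and below shares one root cause, stated at the end of each entry: the serving certificate behind the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-05. Below is a minimal Go sketch of the same validity check that fails during the kubelet's TLS handshake; the address is taken from the log lines, and running it assumes shell access on the node itself. (The UnmountVolume detail for the operation above continues after this note.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Read the certificate the webhook serves; skipping verification is
        // what lets us inspect a cert that would otherwise fail the handshake.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("notBefore:", cert.NotBefore)
        fmt.Println("notAfter: ", cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            fmt.Println("expired: matches the x509 error in the entries above")
        }
    }
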
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.475855 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.489489 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.516401 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.538551 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.543826 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.563572 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.570952 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.570991 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.571013 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.571031 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571159 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571163 5034 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571209 5034 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571172 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571250 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:14.571232033 +0000 UTC m=+26.943231472 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571267 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571270 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:14.571260814 +0000 UTC m=+26.943260253 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571277 5034 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571328 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:14.571319856 +0000 UTC m=+26.943319295 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571174 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571345 5034 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.571369 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:14.571364147 +0000 UTC m=+26.943363586 (durationBeforeRetry 4s). 
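
Note: the "No retries permitted until ... (durationBeforeRetry 4s)" lines reflect the kubelet's per-volume exponential backoff: each failed mount or unmount roughly doubles the wait before the next attempt on that volume. The sketch below is illustrative only, with an assumed initial wait and ceiling rather than values read from kubelet source; a 500ms start doubles to the 4s seen here by the fourth failure. (The MountVolume detail for the last operation above continues after this note.)

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed parameters, for illustration only; not kubelet's actual values.
        wait, ceiling := 500*time.Millisecond, 2*time.Minute
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d failed; durationBeforeRetry %s\n", attempt, wait)
            wait *= 2 // double the wait after every failure
            if wait > ceiling {
                wait = ceiling
            }
        }
    }
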
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.608439 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.656916 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z 
is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.664774 5034 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.669654 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.669723 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.669738 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.669910 5034 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.688234 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.739527 5034 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.739778 5034 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.741374 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.741395 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.741405 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.741421 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.741430 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:10Z","lastTransitionTime":"2026-01-05T21:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.759807 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.764329 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.764379 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.764396 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.764416 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.764429 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:10Z","lastTransitionTime":"2026-01-05T21:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.769479 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.776329 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.779336 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.779361 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.779372 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.779579 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.779588 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:10Z","lastTransitionTime":"2026-01-05T21:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.792016 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.795762 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.795793 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.795806 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.795820 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.795832 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:10Z","lastTransitionTime":"2026-01-05T21:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.808208 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.811949 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.813286 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.813315 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.813327 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.813357 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.813369 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:10Z","lastTransitionTime":"2026-01-05T21:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.827891 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.828006 5034 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.829797 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.829856 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.829870 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.829893 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.829906 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:10Z","lastTransitionTime":"2026-01-05T21:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.837412 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.837560 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.837906 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.837983 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.838063 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:10 crc kubenswrapper[5034]: E0105 21:52:10.838152 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.848218 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.886924 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.921602 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lf4h2"] Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.922060 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.927288 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:10Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.932462 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.932502 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.932514 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.932531 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.932543 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:10Z","lastTransitionTime":"2026-01-05T21:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.937882 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.958168 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.975797 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-host\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.975871 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-serviceca\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.975920 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d8hg\" (UniqueName: \"kubernetes.io/projected/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-kube-api-access-8d8hg\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.977040 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.983743 5034 generic.go:334] "Generic (PLEG): container finished" podID="9e658f8f-1b88-4076-92a9-dd1ebeca6bd8" containerID="988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4" exitCode=0 Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.983781 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" event={"ID":"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8","Type":"ContainerDied","Data":"988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.988051 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.988172 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.988254 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.988322 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" 
event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.988408 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} Jan 05 21:52:10 crc kubenswrapper[5034]: I0105 21:52:10.988502 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.002362 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.034819 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.034862 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.034875 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.034890 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.034900 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.049488 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.077456 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-host\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.077546 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-serviceca\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.077645 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d8hg\" (UniqueName: \"kubernetes.io/projected/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-kube-api-access-8d8hg\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:11 crc 
kubenswrapper[5034]: I0105 21:52:11.078104 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-host\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.078907 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-serviceca\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.084890 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.114837 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d8hg\" (UniqueName: \"kubernetes.io/projected/66c7fd4b-e058-43d1-9ffe-c0e35978e0ab-kube-api-access-8d8hg\") pod \"node-ca-lf4h2\" (UID: \"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\") " pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.137906 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.137944 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.137953 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.137968 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.137980 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.143383 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.187455 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.225315 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.237891 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lf4h2" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.239692 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.239731 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.239741 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.239755 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.239766 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.272634 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: W0105 21:52:11.279140 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66c7fd4b_e058_43d1_9ffe_c0e35978e0ab.slice/crio-f1078353b4881fa2eacf2264a24d2fdaeb85f95878b387f850c02080be9bb07f WatchSource:0}: Error finding container f1078353b4881fa2eacf2264a24d2fdaeb85f95878b387f850c02080be9bb07f: Status 404 returned error can't find the container with id f1078353b4881fa2eacf2264a24d2fdaeb85f95878b387f850c02080be9bb07f Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.307215 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.342488 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.342525 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.342533 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.342547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.342556 5034 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.345661 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.391069 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z 
is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.423670 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.444728 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.444768 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.444776 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.444790 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.444799 5034 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.468026 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.505034 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.545599 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.547676 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.547709 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.547718 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.547732 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.547740 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.586292 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.624531 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.649964 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.650013 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.650037 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.650343 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.650386 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.667272 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.705653 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.746821 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.752198 5034 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.752220 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.752230 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.752244 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.752254 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.786961 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.830126 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z 
is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.854993 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.855036 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.855047 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.855062 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.855070 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.865126 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.905807 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.948841 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.958926 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.958978 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.958993 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.959011 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.959045 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:11Z","lastTransitionTime":"2026-01-05T21:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.991954 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:11Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.995140 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lf4h2" event={"ID":"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab","Type":"ContainerStarted","Data":"50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e"} Jan 05 21:52:11 crc kubenswrapper[5034]: I0105 21:52:11.995247 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lf4h2" event={"ID":"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab","Type":"ContainerStarted","Data":"f1078353b4881fa2eacf2264a24d2fdaeb85f95878b387f850c02080be9bb07f"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.001866 5034 generic.go:334] "Generic (PLEG): container finished" podID="9e658f8f-1b88-4076-92a9-dd1ebeca6bd8" 
containerID="8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5" exitCode=0 Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.001942 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" event={"ID":"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8","Type":"ContainerDied","Data":"8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.030917 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.061685 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.061724 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.061733 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.061748 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.061758 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.074292 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.113575 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee3434
9577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.148835 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.164733 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.164781 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.164795 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.164819 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.164835 5034 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.186747 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.228764 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.268303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.268348 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.268358 5034 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.268372 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.268383 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.271145 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.307044 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.350554 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.370761 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.370808 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.370821 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.370838 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.370848 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.394884 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.429360 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.466543 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.475263 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.475328 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.475348 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.475375 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.475398 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.511101 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z 
is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.544842 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.578055 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.578106 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.578127 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.578143 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.578153 5034 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.585742 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.625366 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.667645 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.680369 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.680411 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.680422 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.680435 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.680444 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.706237 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.745671 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.782948 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.782984 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.782994 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.783011 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.783022 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.791970 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.824437 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.837758 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.837767 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:12 crc kubenswrapper[5034]: E0105 21:52:12.837868 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.837930 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:12 crc kubenswrapper[5034]: E0105 21:52:12.838006 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:12 crc kubenswrapper[5034]: E0105 21:52:12.838057 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.886226 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.886264 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.886273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.886288 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.886297 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.988638 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.988672 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.988683 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.988700 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:12 crc kubenswrapper[5034]: I0105 21:52:12.988711 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:12Z","lastTransitionTime":"2026-01-05T21:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.008156 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.010590 5034 generic.go:334] "Generic (PLEG): container finished" podID="9e658f8f-1b88-4076-92a9-dd1ebeca6bd8" containerID="310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26" exitCode=0 Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.010633 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" event={"ID":"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8","Type":"ContainerDied","Data":"310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26"} Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.032770 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.046672 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.059608 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.071595 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.082621 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.092171 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.092213 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.092222 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.092236 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.092246 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.094518 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.105884 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.147298 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.190846 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z 
is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.194482 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.194509 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.194518 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.194532 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.194543 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.225556 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.264312 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.296564 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.296605 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.296614 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.296629 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.296638 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.306624 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.347869 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\"
 for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.385665 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.398918 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.398952 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.398962 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.398979 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.398990 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.425955 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:13Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.502097 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.502140 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.502152 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.502179 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.502189 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.604454 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.604488 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.604517 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.604532 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.604542 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.706503 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.706555 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.706567 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.706582 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.706590 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.809153 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.809194 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.809205 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.809219 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.809230 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.911897 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.911923 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.911931 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.911944 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:13 crc kubenswrapper[5034]: I0105 21:52:13.911952 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:13Z","lastTransitionTime":"2026-01-05T21:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.014343 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.014377 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.014385 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.014399 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.014409 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.016049 5034 generic.go:334] "Generic (PLEG): container finished" podID="9e658f8f-1b88-4076-92a9-dd1ebeca6bd8" containerID="606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e" exitCode=0 Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.016088 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" event={"ID":"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8","Type":"ContainerDied","Data":"606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e"} Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.031347 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.042732 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.054058 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.073941 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z 
is after 2025-08-24T17:21:41Z" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.085031 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.097180 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.106359 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.117038 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.117092 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.117105 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.117122 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.117133 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.118899 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.131878 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.143001 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.152159 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.169019 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.181679 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.194417 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.208040 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:14Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.219811 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.219831 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.219839 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.219852 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.219862 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.321656 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.321922 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.322012 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.322105 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.322175 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.424797 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.424985 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.425071 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.425157 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.425223 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.516459 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.516784 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:52:22.516756004 +0000 UTC m=+34.888755433 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.527796 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.527819 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.527826 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.527838 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.527847 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.617880 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.617924 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.617955 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.617973 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618056 5034 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618132 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:22.618118526 +0000 UTC m=+34.990117965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618265 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618287 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618297 5034 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618325 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:22.618317552 +0000 UTC m=+34.990316991 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618370 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618388 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618400 5034 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618424 5034 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618433 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:22.618422945 +0000 UTC m=+34.990422384 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.618446 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:22.618439805 +0000 UTC m=+34.990439244 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.629228 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.629250 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.629258 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.629272 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.629281 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.731661 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.731690 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.731699 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.731712 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.731720 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.833640 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.833868 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.833945 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.834021 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.834121 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.837828 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.837969 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.838120 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.838204 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.838293 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:52:14 crc kubenswrapper[5034]: E0105 21:52:14.838401 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.936830 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.936875 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.936884 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.936902 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:14 crc kubenswrapper[5034]: I0105 21:52:14.936913 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:14Z","lastTransitionTime":"2026-01-05T21:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.021782 5034 generic.go:334] "Generic (PLEG): container finished" podID="9e658f8f-1b88-4076-92a9-dd1ebeca6bd8" containerID="d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9" exitCode=0 Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.021828 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" event={"ID":"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8","Type":"ContainerDied","Data":"d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.034459 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.038528 5034 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.038565 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.038578 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.038595 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.038606 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.048017 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.065583 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z 
is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.075506 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.086013 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.095768 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.109350 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.123719 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.139945 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.140900 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.140938 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.140947 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.140961 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.140971 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.150201 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.159521 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.172707 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.184537 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.194667 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.211583 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.242719 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.242762 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.242791 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.242810 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.242824 5034 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.345006 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.345038 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.345048 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.345063 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.345090 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.447679 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.447712 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.447723 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.447740 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.447750 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.549762 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.549803 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.549818 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.549833 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.549845 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.652451 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.652488 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.652496 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.652508 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.652517 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.755027 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.755295 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.755308 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.755324 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.755337 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.857227 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.857259 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.857266 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.857279 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.857288 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.959194 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.959222 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.959229 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.959242 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:15 crc kubenswrapper[5034]: I0105 21:52:15.959250 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:15Z","lastTransitionTime":"2026-01-05T21:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.029303 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.029556 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.035600 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" event={"ID":"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8","Type":"ContainerStarted","Data":"95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.043552 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.054888 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.061424 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.062884 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.062914 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.062922 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.062934 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.062944 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.071816 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.090140 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.131206 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.157801 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.165442 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.165491 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.165504 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.165522 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.165532 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.177312 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.186665 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.197489 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.216573 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run
-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.226294 5034 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.237113 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.247715 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.260665 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.267740 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.267766 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.267775 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.267788 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.267798 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.275424 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.287892 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.298415 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.308761 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.324516 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/m
etrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},
{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.334379 5034 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.345642 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.356883 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.370457 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.370496 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.370519 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.370534 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.370543 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.371834 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.383575 5034 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.394288 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.404833 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.424404 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef
1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.438577 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.449105 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.460359 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.472831 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.472973 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.473054 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.473152 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.473278 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.577270 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.577303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.577311 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.577325 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.577352 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.679976 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.680013 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.680021 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.680038 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.680046 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.782563 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.782594 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.782603 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.782616 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.782625 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.837732 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.837774 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:16 crc kubenswrapper[5034]: E0105 21:52:16.837838 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.837774 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:16 crc kubenswrapper[5034]: E0105 21:52:16.837896 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:16 crc kubenswrapper[5034]: E0105 21:52:16.837973 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.884794 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.884875 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.884889 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.884907 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.884918 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.986782 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.986810 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.986818 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.986830 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:16 crc kubenswrapper[5034]: I0105 21:52:16.986839 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:16Z","lastTransitionTime":"2026-01-05T21:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.039155 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.039627 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.062340 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.076095 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.087145 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.088704 5034 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.088743 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.088753 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.088774 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.088790 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.106163 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.128713 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9
d989a8d7939b5da2997a02f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.139779 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.151321 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.166608 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.182750 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 
2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.190832 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.190863 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.190872 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.190885 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.190895 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.196372 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.217110 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.226984 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.244567 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.257113 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.268378 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.278849 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.292587 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.292624 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.292635 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.292651 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.292662 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.395071 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.395143 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.395155 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.395174 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.395191 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.497336 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.497373 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.497383 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.497396 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.497407 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.599714 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.599750 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.599760 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.599774 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.599784 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.702193 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.702227 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.702235 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.702250 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.702263 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.804516 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.804553 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.804565 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.804580 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.804593 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.851812 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.866744 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.876450 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.894283 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.907291 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.907339 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.907352 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.907368 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.907381 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:17Z","lastTransitionTime":"2026-01-05T21:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.908276 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.918684 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.928645 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.941507 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.958536 5034 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.975262 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:17 crc kubenswrapper[5034]: I0105 21:52:17.995207 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\"
:\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 
crc kubenswrapper[5034]: I0105 21:52:18.011065 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.011484 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.011535 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.011549 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.011572 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.011589 5034 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.023458 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.034811 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.045246 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/0.log" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.047285 5034 generic.go:334] 
"Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9" exitCode=1 Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.047342 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.048722 5034 scope.go:117] "RemoveContainer" containerID="a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.049318 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\
\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.063758 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.082518 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.097056 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.109121 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.114745 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.114800 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.114816 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.114836 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.114851 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.124938 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.135494 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.161810 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee3434
9577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.178046 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.191285 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.205725 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.217611 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.217664 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.217675 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.217689 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.217724 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.219499 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.233103 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.246024 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.267313 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:17Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:52:17.310365 6329 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:52:17.310393 6329 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:52:17.310496 6329 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:52:17.310917 6329 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:52:17.310938 6329 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0105 21:52:17.310943 6329 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0105 21:52:17.310967 6329 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:52:17.311002 6329 factory.go:656] Stopping watch factory\\\\nI0105 21:52:17.311019 6329 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:52:17.311031 6329 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:52:17.311044 6329 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0105 21:52:17.311051 6329 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.278508 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:18Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.320285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.320314 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.320322 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.320336 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.320344 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.423652 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.423677 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.423686 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.423699 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.423707 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.525406 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.525431 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.525438 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.525452 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.525460 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.627296 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.627347 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.627359 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.627377 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.627390 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.729897 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.729979 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.729994 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.730030 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.730060 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.833461 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.833495 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.833503 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.833515 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.833523 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.837711 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:18 crc kubenswrapper[5034]: E0105 21:52:18.838045 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.837778 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.837765 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:18 crc kubenswrapper[5034]: E0105 21:52:18.838131 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:18 crc kubenswrapper[5034]: E0105 21:52:18.838182 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.936029 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.936067 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.936094 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.936109 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:18 crc kubenswrapper[5034]: I0105 21:52:18.936120 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:18Z","lastTransitionTime":"2026-01-05T21:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.038828 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.038870 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.038880 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.038896 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.038907 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.052620 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/0.log" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.054963 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.055059 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.068418 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.078477 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.088636 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.109386 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.122261 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.132694 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.141330 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.141371 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.141388 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.141408 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.141424 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.145386 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.156907 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.167801 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.178775 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.194287 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:17Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:52:17.310365 6329 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:52:17.310393 6329 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:52:17.310496 6329 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:52:17.310917 6329 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:52:17.310938 6329 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0105 21:52:17.310943 6329 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0105 21:52:17.310967 6329 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:52:17.311002 6329 factory.go:656] Stopping watch factory\\\\nI0105 21:52:17.311019 6329 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:52:17.311031 6329 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:52:17.311044 6329 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:52:17.311051 6329 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.203221 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.214497 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.224130 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.237313 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 
2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.243822 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.243876 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.243891 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.243907 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.243919 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.346834 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.346872 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.346881 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.346896 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.346905 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.448908 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.448947 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.448957 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.448971 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.448980 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.551178 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.551222 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.551234 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.551248 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.551258 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.653155 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.653191 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.653199 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.653214 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.653223 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.756069 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.756133 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.756143 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.756158 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.756170 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.857863 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.857902 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.857913 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.857927 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.857938 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.959322 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw"] Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.959914 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.960437 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.960477 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.960487 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.960503 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.960514 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:19Z","lastTransitionTime":"2026-01-05T21:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.962505 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.962600 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.976150 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:19 crc kubenswrapper[5034]: I0105 21:52:19.986536 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:19Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.001644 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.012621 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.023303 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.035160 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.045981 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.059578 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/1.log" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.060140 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/0.log" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.061814 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.061876 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.061942 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.061967 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.061977 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.062772 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759" exitCode=1 Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.062802 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.062844 5034 scope.go:117] "RemoveContainer" containerID="a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.063589 5034 scope.go:117] "RemoveContainer" containerID="f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759" Jan 05 21:52:20 crc kubenswrapper[5034]: E0105 21:52:20.063944 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.065099 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.065475 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da6626bb-3c1d-4149-911b-32b988ab216c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.065505 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da6626bb-3c1d-4149-911b-32b988ab216c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.065531 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da6626bb-3c1d-4149-911b-32b988ab216c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.065604 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jchpb\" (UniqueName: \"kubernetes.io/projected/da6626bb-3c1d-4149-911b-32b988ab216c-kube-api-access-jchpb\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.079343 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.093300 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.106241 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.117317 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.125364 5034 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.135490 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.154113 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\"
:\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:17Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:52:17.310365 6329 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:52:17.310393 6329 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:52:17.310496 6329 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:52:17.310917 6329 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:52:17.310938 6329 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0105 21:52:17.310943 6329 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0105 21:52:17.310967 6329 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:52:17.311002 6329 factory.go:656] Stopping watch factory\\\\nI0105 21:52:17.311019 6329 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:52:17.311031 6329 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:52:17.311044 6329 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:52:17.311051 
6329 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.163095 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.164093 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.164133 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.164145 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.164161 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.164172 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.166470 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jchpb\" (UniqueName: \"kubernetes.io/projected/da6626bb-3c1d-4149-911b-32b988ab216c-kube-api-access-jchpb\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.166540 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da6626bb-3c1d-4149-911b-32b988ab216c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.166568 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da6626bb-3c1d-4149-911b-32b988ab216c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.166596 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da6626bb-3c1d-4149-911b-32b988ab216c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.167320 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da6626bb-3c1d-4149-911b-32b988ab216c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.167437 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da6626bb-3c1d-4149-911b-32b988ab216c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.177633 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da6626bb-3c1d-4149-911b-32b988ab216c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.180373 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.181921 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jchpb\" (UniqueName: \"kubernetes.io/projected/da6626bb-3c1d-4149-911b-32b988ab216c-kube-api-access-jchpb\") pod \"ovnkube-control-plane-749d76644c-l4hpw\" (UID: \"da6626bb-3c1d-4149-911b-32b988ab216c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.190755 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.201938 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.211944 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.221733 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.232117 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.240188 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.255538 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef
1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.265893 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.266702 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.266739 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.266751 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.266777 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.266788 5034 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.276418 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.276677 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.287256 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: W0105 21:52:20.288426 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda6626bb_3c1d_4149_911b_32b988ab216c.slice/crio-f83dd5b49748ae1ea868faacb1b652517290aa33ac78c2fd0712bbc4fdb5b135 WatchSource:0}: Error finding container f83dd5b49748ae1ea868faacb1b652517290aa33ac78c2fd0712bbc4fdb5b135: Status 404 returned error can't find the container with id 
f83dd5b49748ae1ea868faacb1b652517290aa33ac78c2fd0712bbc4fdb5b135 Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.300611 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.311567 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.322515 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.346984 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a721ce83f3ad20fd7801cd71d68c6384ebb469f9d989a8d7939b5da2997a02f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:17Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:52:17.310365 6329 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:52:17.310393 6329 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:52:17.310496 6329 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:52:17.310917 6329 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:52:17.310938 6329 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0105 21:52:17.310943 6329 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0105 21:52:17.310967 6329 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:52:17.311002 6329 factory.go:656] Stopping watch factory\\\\nI0105 21:52:17.311019 6329 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:52:17.311031 6329 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:52:17.311044 6329 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:52:17.311051 6329 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.355802 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:20Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.369750 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.369810 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.369839 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.369852 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.369864 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.471762 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.471802 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.471815 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.471831 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.471840 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.574375 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.574432 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.574470 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.574510 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.574523 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.676538 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.676589 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.676599 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.676614 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.676623 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.779283 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.779318 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.779331 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.779347 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.779356 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.838177 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.838253 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:20 crc kubenswrapper[5034]: E0105 21:52:20.838319 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:20 crc kubenswrapper[5034]: E0105 21:52:20.838448 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.838283 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:20 crc kubenswrapper[5034]: E0105 21:52:20.838535 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.881273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.881320 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.881329 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.881343 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.881352 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.983889 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.983937 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.983951 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.983970 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:20 crc kubenswrapper[5034]: I0105 21:52:20.983982 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:20Z","lastTransitionTime":"2026-01-05T21:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.067068 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/1.log" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.070532 5034 scope.go:117] "RemoveContainer" containerID="f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.070680 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.072090 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" event={"ID":"da6626bb-3c1d-4149-911b-32b988ab216c","Type":"ContainerStarted","Data":"5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.072118 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" event={"ID":"da6626bb-3c1d-4149-911b-32b988ab216c","Type":"ContainerStarted","Data":"a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.072128 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" event={"ID":"da6626bb-3c1d-4149-911b-32b988ab216c","Type":"ContainerStarted","Data":"f83dd5b49748ae1ea868faacb1b652517290aa33ac78c2fd0712bbc4fdb5b135"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.083468 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.083516 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.083526 5034 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.083543 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.083553 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.093276 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.101980 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.103376 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.105813 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.105861 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.105878 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.105900 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.105918 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.113326 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.117466 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.120451 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.120489 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
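
The pod- and node-status patch failures above all reduce to one TLS fact: the network-node-identity webhook at https://127.0.0.1:9743 is serving a certificate whose NotAfter (2025-08-24T17:21:41Z) is behind the node's clock (2026-01-05T21:52:21Z), so every Post fails with "x509: certificate has expired or is not yet valid". A minimal Go sketch of the validity-window check that produces that error follows; the PEM filename is hypothetical, while the comparison mirrors what the handshake does.

```go
// Minimal sketch of the x509 validity-window check behind the errors above.
// The PEM path is a placeholder, not a path taken from this log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("webhook-serving-cert.pem") // hypothetical file
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now().UTC()
	// The TLS handshake applies this same window test; failing it yields
	// "x509: certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```
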
event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.120498 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.120513 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.120524 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.130206 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa
255de819beda30449eaa4759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.130641 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.133377 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.133414 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
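
The ovnkube-controller container above exits because its own node-annotation write hits the same expired webhook certificate, and kubelet then parks it in CrashLoopBackOff ("back-off 10s restarting failed container=ovnkube-controller"). As a sketch of what that back-off schedule looks like: the commonly documented kubelet defaults start at 10s and double per failed restart up to a 5m cap; the constants below are those defaults, not values parsed from this log.

```go
// Sketch of the CrashLoopBackOff delay schedule, using the commonly
// documented kubelet defaults (10s initial delay, doubling, 5m cap).
package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart attempt %d: wait %v\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```
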
event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.133428 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.133441 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.133451 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.141096 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.143113 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.146510 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.146549 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
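
Independently of the webhook failures, the node's Ready=False condition comes from the runtime reporting NetworkReady=false: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, which is expected while ovnkube-controller is crash-looping, since OVN-Kubernetes is what supplies that config. Below is a standalone sketch of the implied readiness test, not kubelet's actual implementation; the accepted extensions match what libcni conventionally loads.

```go
// Standalone sketch of the "is there any CNI config?" test implied by
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("NetworkReady=false:", err)
		return
	}
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // extensions libcni conventionally loads
			fmt.Println("NetworkReady=true, found", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
}
```
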
event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.146584 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.146602 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.146614 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.152012 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.158066 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.158199 5034 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.159540 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.159584 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.159594 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.159610 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.159647 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.161589 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.174981 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.188315 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.198467 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.207484 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.214608 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.229818 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef
1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.241638 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.253640 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.262227 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.262459 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.262471 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.262486 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.262497 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.264956 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.276478 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.285658 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.297962 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.312752 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.321250 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.330865 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.339637 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.351186 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 
2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.359471 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.364619 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.364650 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.364665 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.364680 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.364690 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.369847 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.378929 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.386768 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.402604 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-99zr4"] Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.403142 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.403800 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.405648 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8
d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.420462 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.431281 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.441923 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.452299 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.462646 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.469043 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.469072 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.469097 5034 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.469112 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.469123 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.479574 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.479715 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.479765 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhvd\" (UniqueName: \"kubernetes.io/projected/7949c792-bd35-4fb3-9235-402a13c61026-kube-api-access-xlhvd\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.495208 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.509132 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.538560 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1
d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.550270 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.561060 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.571038 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.571088 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.571101 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.571144 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.571157 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.580485 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.580523 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhvd\" (UniqueName: \"kubernetes.io/projected/7949c792-bd35-4fb3-9235-402a13c61026-kube-api-access-xlhvd\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.580632 5034 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:21 crc kubenswrapper[5034]: E0105 21:52:21.580722 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs podName:7949c792-bd35-4fb3-9235-402a13c61026 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:22.080701019 +0000 UTC m=+34.452700458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs") pod "network-metrics-daemon-99zr4" (UID: "7949c792-bd35-4fb3-9235-402a13c61026") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.584246 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.607863 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhvd\" (UniqueName: \"kubernetes.io/projected/7949c792-bd35-4fb3-9235-402a13c61026-kube-api-access-xlhvd\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.615005 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.647825 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.662213 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.673941 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.673985 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.674000 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.674023 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.674038 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.679990 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.693887 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.707208 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.721797 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.733167 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:21Z is after 2025-08-24T17:21:41Z"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.776279 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.776323 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.776336 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.776353 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.776364 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.878769 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.878812 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.878822 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.878838 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
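Roughly every 100 ms the kubelet re-records the same four node events and re-asserts Ready=False, and the condition message names the actual blocker: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick stdlib check of exactly that directory (the path is quoted verbatim from the message; on non-OpenShift setups the equivalent directory is usually /etc/cni/net.d):

    import os

    cni_dir = "/etc/kubernetes/cni/net.d"  # path quoted in the NodeNotReady message
    try:
        entries = sorted(os.listdir(cni_dir))
    except FileNotFoundError:
        entries = None
    # The kubelet stays NotReady until the network plugin (OVN-Kubernetes on
    # OpenShift) writes a .conf/.conflist file into this directory.
    print(f"{cni_dir}: {entries if entries else 'missing or empty'}")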
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.878847 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.980697 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.980741 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.980750 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.980766 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:21 crc kubenswrapper[5034]: I0105 21:52:21.980777 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:21Z","lastTransitionTime":"2026-01-05T21:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.082773 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.082857 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.082881 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.082916 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.082975 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.085322 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.085538 5034 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.085665 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs podName:7949c792-bd35-4fb3-9235-402a13c61026 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:23.085628367 +0000 UTC m=+35.457627836 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs") pod "network-metrics-daemon-99zr4" (UID: "7949c792-bd35-4fb3-9235-402a13c61026") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.185677 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.185743 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.185760 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.185779 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.185796 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.287858 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.287913 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.287924 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.287944 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
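The metrics-certs failure above is not necessarily a missing Secret: "object ... not registered" is the kubelet's cache-based secret/configmap manager saying it has no watch registered for that object yet, which is expected while the node is NotReady and its informers have not synced. Whether the object really exists can be checked against the API server directly; a sketch using the third-party kubernetes Python client (assumes a working kubeconfig on the host):

    from kubernetes import client, config  # third-party: pip install kubernetes
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()
    try:
        v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
        print("Secret exists; the kubelet cache simply has not registered it yet")
    except ApiException as e:
        print(f"Secret lookup failed: HTTP {e.status}")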
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.287958 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.389669 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.389723 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.389738 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.389759 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.389771 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.491663 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.491708 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.491717 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.491733 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.491741 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.590417 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.590549 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:52:38.590520654 +0000 UTC m=+50.962520103 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.593283 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.593321 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.593330 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.593345 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.593354 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.691324 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.691365 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.691387 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.691406 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
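The TearDown failure above ("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers") means the CSI driver has not re-registered with this kubelet since the restart. Node-level registration is reflected in the node's CSINode object, which can be inspected with the same kubernetes client (third-party package; the node name crc is taken from the log):

    from kubernetes import client, config  # third-party: pip install kubernetes

    config.load_kube_config()
    storage = client.StorageV1Api()

    # Drivers currently registered with the kubelet on node "crc".
    csinode = storage.read_csi_node("crc")
    registered = [d.name for d in (csinode.spec.drivers or [])]
    print("registered on node:", registered)
    print("hostpath provisioner present:",
          "kubevirt.io.hostpath-provisioner" in registered)

Until the driver's node plugin comes back and re-registers, the kubelet will keep rescheduling this unmount with the 16s backoff seen above.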
"openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691506 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:38.691485335 +0000 UTC m=+51.063484784 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691511 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691528 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691529 5034 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691609 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:38.691594638 +0000 UTC m=+51.063594077 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691538 5034 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691654 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:38.69164868 +0000 UTC m=+51.063648119 (durationBeforeRetry 16s). 
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691550 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691673 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691681 5034 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.691698 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:38.691692871 +0000 UTC m=+51.063692310 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.695524 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.695548 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.695556 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.695569 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
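The same handful of objects (metrics-daemon-secret, networking-console-plugin, networking-console-plugin-cert, kube-root-ca.crt, openshift-service-ca.crt) recurs across all of these mount errors. When triaging a long capture like this one, a small stdlib filter makes the distinct set and the counts obvious; the script below reads journal text on stdin, e.g. journalctl -u kubelet | python3 not_registered.py, and the pattern tolerates both the plain and the backslash-escaped quoting seen in these lines:

    import re
    import sys
    from collections import Counter

    # Matches: object "ns"/"name" not registered   (also the \"-escaped form)
    pat = re.compile(r'object \\?"([^"\\]+)\\?"/\\?"([^"\\]+)\\?" not registered')

    counts = Counter()
    for line in sys.stdin:
        for ns, name in pat.findall(line):
            counts[(ns, name)] += 1

    for (ns, name), n in counts.most_common():
        print(f"{n:6d}  {ns}/{name}")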
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.695580 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.798039 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.798071 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.798096 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.798110 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.798125 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.837875 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.837977 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.838040 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.838137 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.838175 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.838272 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.838325 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:22 crc kubenswrapper[5034]: E0105 21:52:22.838478 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.900702 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.900745 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.900753 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.900770 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:22 crc kubenswrapper[5034]: I0105 21:52:22.900779 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:22Z","lastTransitionTime":"2026-01-05T21:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.002492 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.002554 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.002563 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.002653 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.002672 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.095150 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:52:23 crc kubenswrapper[5034]: E0105 21:52:23.095299 5034 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 05 21:52:23 crc kubenswrapper[5034]: E0105 21:52:23.095422 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs podName:7949c792-bd35-4fb3-9235-402a13c61026 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:25.095385351 +0000 UTC m=+37.467384790 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs") pod "network-metrics-daemon-99zr4" (UID: "7949c792-bd35-4fb3-9235-402a13c61026") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.104665 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.104696 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.104706 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.104718 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
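For the metrics-certs volume the retry delay doubles on every failure: 1s at 21:52:22, 2s here, 4s at 21:52:25 below; volumes that were already failing before this capture sit at 16s. That is consistent with the kubelet's exponential backoff for volume operations (nestedpendingoperations). An illustrative doubling schedule with a cap; the exact initial value and cap are kubelet internals and are assumptions here, not values taken from this log:

    from datetime import timedelta

    def backoff_schedule(initial=timedelta(seconds=1), factor=2,
                         cap=timedelta(minutes=2), steps=8):
        """Illustrative doubling schedule: 1s, 2s, 4s, ... up to a cap."""
        delay = initial
        for _ in range(steps):
            yield delay
            delay = min(delay * factor, cap)

    print([str(d) for d in backoff_schedule()])
    # 0:00:01, 0:00:02, 0:00:04, 0:00:08, 0:00:16, ... capped at 0:02:00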
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.104727 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.207472 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.207514 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.207525 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.207540 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.207552 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.310157 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.310189 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.310200 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.310214 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.310223 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.412116 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.412142 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.412149 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.412161 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.412170 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.514549 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.514579 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.514587 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.514598 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.514606 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.617731 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.617767 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.617777 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.617789 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.617798 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.720366 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.720401 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.720411 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.720429 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.720442 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.825279 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.825337 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.825352 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.825374 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.825390 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.928318 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.928926 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.929001 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.929126 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:23 crc kubenswrapper[5034]: I0105 21:52:23.929191 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:23Z","lastTransitionTime":"2026-01-05T21:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.031712 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.031753 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.031764 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.031779 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.031788 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.133687 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.133765 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.133799 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.133830 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.133852 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.236451 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.236486 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.236494 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.236509 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.236517 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.339337 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.339375 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.339387 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.339405 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.339417 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.442246 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.442308 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.442327 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.442350 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.442368 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.545110 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.545165 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.545176 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.545192 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.545201 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.647394 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.647445 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.647463 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.647482 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.647495 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.749235 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.749273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.749284 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.749309 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.749319 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.837777 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.837839 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.837794 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:52:24 crc kubenswrapper[5034]: E0105 21:52:24.837953 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.837983 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:52:24 crc kubenswrapper[5034]: E0105 21:52:24.838144 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:52:24 crc kubenswrapper[5034]: E0105 21:52:24.838272 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
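The four pods cycling through "No sandbox for pod can be found" / "Error syncing pod" every two seconds are all blocked behind the missing CNI configuration, so their containers sit waiting in ContainerCreating. A sketch that lists the waiting containers in the affected namespaces (kubernetes client again, a third-party package; the namespaces are taken from the log):

    from kubernetes import client, config  # third-party: pip install kubernetes

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for ns in ("openshift-multus",
               "openshift-network-diagnostics",
               "openshift-network-console"):
        for pod in v1.list_namespaced_pod(ns).items:
            # Collect containers stuck in a waiting state (e.g. ContainerCreating).
            waiting = [(c.name, c.state.waiting.reason)
                       for c in (pod.status.container_statuses or [])
                       if c.state and c.state.waiting]
            if waiting:
                print(ns, pod.metadata.name, waiting)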
pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:24 crc kubenswrapper[5034]: E0105 21:52:24.838352 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.851572 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.851619 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.851633 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.851652 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.851666 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.954967 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.955039 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.955063 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.955140 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:24 crc kubenswrapper[5034]: I0105 21:52:24.955167 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:24Z","lastTransitionTime":"2026-01-05T21:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.058261 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.058335 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.058361 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.058389 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.058410 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.113857 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:25 crc kubenswrapper[5034]: E0105 21:52:25.114011 5034 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:25 crc kubenswrapper[5034]: E0105 21:52:25.114113 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs podName:7949c792-bd35-4fb3-9235-402a13c61026 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:29.114055504 +0000 UTC m=+41.486054943 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs") pod "network-metrics-daemon-99zr4" (UID: "7949c792-bd35-4fb3-9235-402a13c61026") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.160475 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.160519 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.160542 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.160559 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.160572 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.264985 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.265059 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.265113 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.265146 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.265167 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.368198 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.368250 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.368268 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.368302 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.368336 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.470716 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.470763 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.470775 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.470792 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.470803 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.573191 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.573242 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.573259 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.573277 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.573288 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.675506 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.675557 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.675569 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.675586 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.675617 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.778285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.778333 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.778345 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.778363 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.778375 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.883038 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.883109 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.883126 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.883146 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.883165 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.986044 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.986241 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.986259 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.986278 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:25 crc kubenswrapper[5034]: I0105 21:52:25.986292 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:25Z","lastTransitionTime":"2026-01-05T21:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.088438 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.088480 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.088489 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.088502 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.088511 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.190492 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.190537 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.190549 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.190567 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.190580 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.293493 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.293537 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.293547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.293561 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.293572 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.395460 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.395507 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.395519 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.395536 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.395548 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.497721 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.497774 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.497782 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.497797 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.497807 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.600276 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.600303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.600313 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.600326 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.600334 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.609008 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.626208 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.643484 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.657382 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.668016 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.678791 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.689767 5034 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.701650 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.702421 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.702443 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.702452 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.702463 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.702472 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.718165 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa
255de819beda30449eaa4759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.727519 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.738022 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.750942 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.766005 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.786054 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.797120 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.804425 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.804462 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.804476 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.804496 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.804507 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.808276 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.819645 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.827889 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.838282 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.838306 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.838321 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.838340 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:26 crc kubenswrapper[5034]: E0105 21:52:26.838412 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:26 crc kubenswrapper[5034]: E0105 21:52:26.838457 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:26 crc kubenswrapper[5034]: E0105 21:52:26.838500 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:26 crc kubenswrapper[5034]: E0105 21:52:26.838535 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.906222 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.906262 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.906272 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.906285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:26 crc kubenswrapper[5034]: I0105 21:52:26.906294 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:26Z","lastTransitionTime":"2026-01-05T21:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.009194 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.009236 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.009248 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.009263 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.009275 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.111529 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.111560 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.111570 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.111584 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.111595 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.218723 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.218764 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.218776 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.218792 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.218808 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.322499 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.322561 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.322577 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.322599 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.322616 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.425870 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.425943 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.425968 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.426009 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.426034 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.528913 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.529021 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.529042 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.529116 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.529137 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.631688 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.631728 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.631744 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.631759 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.631770 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.733948 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.733985 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.733995 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.734009 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.734018 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.836714 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.836763 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.836775 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.836793 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.836807 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.849027 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.858404 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.870398 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.880049 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.889939 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.899374 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.908004 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.928769 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef
1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.940453 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.940501 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.940511 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.940528 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.940537 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:27Z","lastTransitionTime":"2026-01-05T21:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.943492 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.957844 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.975281 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:27 crc kubenswrapper[5034]: I0105 21:52:27.991680 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.004163 5034 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:28Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.019367 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:28Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.040309 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\"
:\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:28Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.043481 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.043560 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.043583 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.043619 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.043642 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.056882 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:28Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.069688 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:28Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.146760 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.146812 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.146821 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.146836 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.146845 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.250056 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.250330 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.250393 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.250495 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.250566 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.352763 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.352802 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.352812 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.352828 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.352838 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.455702 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.455765 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.455778 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.455792 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.455801 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.557622 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.557673 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.557683 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.557697 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.557706 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.659877 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.659920 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.659931 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.659947 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.659959 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.762719 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.762759 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.762769 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.762783 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.762796 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.837943 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.838001 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.838037 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:28 crc kubenswrapper[5034]: E0105 21:52:28.838168 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.838183 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:28 crc kubenswrapper[5034]: E0105 21:52:28.838348 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:28 crc kubenswrapper[5034]: E0105 21:52:28.838392 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:28 crc kubenswrapper[5034]: E0105 21:52:28.838454 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.865029 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.865112 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.865123 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.865166 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.865177 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.966996 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.967044 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.967054 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.967069 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:28 crc kubenswrapper[5034]: I0105 21:52:28.967095 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:28Z","lastTransitionTime":"2026-01-05T21:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.069139 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.069183 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.069196 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.069213 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.069224 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:29Z","lastTransitionTime":"2026-01-05T21:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.159780 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:29 crc kubenswrapper[5034]: E0105 21:52:29.159992 5034 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:29 crc kubenswrapper[5034]: E0105 21:52:29.160072 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs podName:7949c792-bd35-4fb3-9235-402a13c61026 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:37.160053913 +0000 UTC m=+49.532053352 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs") pod "network-metrics-daemon-99zr4" (UID: "7949c792-bd35-4fb3-9235-402a13c61026") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.172067 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.172116 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.172125 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.172139 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.172148 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:29Z","lastTransitionTime":"2026-01-05T21:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.274461 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.274494 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.274505 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.274520 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.274531 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:29Z","lastTransitionTime":"2026-01-05T21:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.376167 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.376215 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.376225 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.376240 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:29 crc kubenswrapper[5034]: I0105 21:52:29.376249 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:29Z","lastTransitionTime":"2026-01-05T21:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
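While the CNI configuration is missing, kubelet re-records this same five-event sequence roughly every 100 ms, and each "Node became not ready" entry carries the node's Ready condition inline as JSON after condition=. As a minimal sketch in plain Python (the sample line below is one of the entries above with its long message shortened for readability, so the literal is an illustrative assumption, not the full log text), the condition can be pulled straight out of a journal line:

import json
import re

# One "Node became not ready" entry from above; the CNI message is
# truncated in this literal for readability (hypothetical shortening).
line = ('I0105 21:52:29.376249 5034 setters.go:603] "Node became not ready" node="crc" '
        'condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:29Z",'
        '"lastTransitionTime":"2026-01-05T21:52:29Z","reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false"}')

match = re.search(r'condition=(\{.*\})', line)  # the condition is logged as inline JSON
cond = json.loads(match.group(1))
print(cond["type"], cond["status"], cond["reason"])
# -> Ready False KubeletNotReady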
Jan 05 21:52:30 crc kubenswrapper[5034]: I0105 21:52:30.837602 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:52:30 crc kubenswrapper[5034]: I0105 21:52:30.837685 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:52:30 crc kubenswrapper[5034]: I0105 21:52:30.837709 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:52:30 crc kubenswrapper[5034]: I0105 21:52:30.837666 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:30 crc kubenswrapper[5034]: E0105 21:52:30.837806 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:52:30 crc kubenswrapper[5034]: E0105 21:52:30.837910 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:52:30 crc kubenswrapper[5034]: E0105 21:52:30.837984 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:52:30 crc kubenswrapper[5034]: E0105 21:52:30.838027 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.093072 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz"
Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.093868 5034 scope.go:117] "RemoveContainer" containerID="f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759"
Jan 05 21:52:31 crc kubenswrapper[5034]: E0105 21:52:31.268217 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.274409 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.274460 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.274471 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.274487 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.274505 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:31Z","lastTransitionTime":"2026-01-05T21:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:31 crc kubenswrapper[5034]: E0105 21:52:31.285297 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.288030 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.288066 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
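The status patch above never reaches admission: the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 presents an expired serving certificate, so every retry fails the same way. The two timestamps quoted in the x509 error are enough to measure the gap; a quick worked check, with both values copied verbatim from the error:

from datetime import datetime, timezone

# Both values are quoted verbatim from the x509 error above.
now = datetime(2026, 1, 5, 21, 52, 31, tzinfo=timezone.utc)        # "current time"
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc) # certificate expiry

print(now > not_after)   # True: the webhook cert expired before this boot
print(now - not_after)   # 134 days, 4:30:50 past expiry, over four months stale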
event="NodeHasNoDiskPressure" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.288089 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.288107 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.288118 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:31Z","lastTransitionTime":"2026-01-05T21:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:31 crc kubenswrapper[5034]: E0105 21:52:31.298235 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.302351 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.302394 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.302403 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.302424 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.302452 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:31Z","lastTransitionTime":"2026-01-05T21:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:31 crc kubenswrapper[5034]: E0105 21:52:31.312702 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.316247 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.316280 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.316290 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.316303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.316312 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:31Z","lastTransitionTime":"2026-01-05T21:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:31 crc kubenswrapper[5034]: E0105 21:52:31.330985 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:31 crc kubenswrapper[5034]: E0105 21:52:31.331128 5034 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.332578 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.332611 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.332619 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.332633 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.332642 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:31Z","lastTransitionTime":"2026-01-05T21:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.434371 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.434405 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.434413 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.434426 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.434436 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:31Z","lastTransitionTime":"2026-01-05T21:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.536656 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.536701 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.536710 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.536725 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:31 crc kubenswrapper[5034]: I0105 21:52:31.536733 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:31Z","lastTransitionTime":"2026-01-05T21:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[The five-entry heartbeat group logged at 21:52:31.332 above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats verbatim, apart from timestamps, at 21:52:31.434, 21:52:31.536, 21:52:31.639, 21:52:31.741, 21:52:31.843, and 21:52:31.945; the duplicate groups are elided.] Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.048532 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.048580 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.048595 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.048615 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.048632 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.103684 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/2.log" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.104180 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/1.log" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.106594 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a" exitCode=1 Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.106628 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.106658 5034 scope.go:117] "RemoveContainer" containerID="f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.110261 5034 scope.go:117] "RemoveContainer" containerID="9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a" Jan 05 21:52:32 crc kubenswrapper[5034]: E0105 21:52:32.110493 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.121299 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.136165 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.145336 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.150174 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.150213 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.150224 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.150240 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.150250 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.173618 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.186862 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.199916 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.212052 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.226144 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.236580 5034 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.246760 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.252761 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.252805 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.252814 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.252829 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.252841 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.262984 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb
0232d87ced1af431fb96305a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 
21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.271069 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.279589 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.288513 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.297059 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.307205 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 
2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.315028 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.354516 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.354552 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.354560 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.354573 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.354582 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.456621 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.456657 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.456665 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.456678 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.456686 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.559218 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.559260 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.559285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.559297 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.559305 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.661669 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.661738 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.661750 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.661763 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.661772 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.764232 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.764274 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.764285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.764303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.764315 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.838134 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.838210 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:32 crc kubenswrapper[5034]: E0105 21:52:32.838245 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.838274 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:32 crc kubenswrapper[5034]: E0105 21:52:32.838352 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.838215 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:32 crc kubenswrapper[5034]: E0105 21:52:32.838421 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:32 crc kubenswrapper[5034]: E0105 21:52:32.838482 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.866646 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.866687 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.866701 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.866718 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.866731 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.969470 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.969541 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.969565 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.969593 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:32 crc kubenswrapper[5034]: I0105 21:52:32.969614 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:32Z","lastTransitionTime":"2026-01-05T21:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.071738 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.071801 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.071824 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.071849 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.071868 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.111922 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/2.log" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.174731 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.174759 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.174770 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.174785 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.174797 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.277344 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.277391 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.277420 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.277442 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.277454 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.380333 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.380383 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.380394 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.380411 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.380419 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.483319 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.483382 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.483399 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.483423 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.483439 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.586295 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.586337 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.586350 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.586369 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.586384 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.688709 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.688788 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.688805 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.688823 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.688834 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.792502 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.792551 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.792573 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.792596 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.792657 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.894610 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.894652 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.894665 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.894679 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.894690 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.997001 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.997122 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.997149 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.997179 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:33 crc kubenswrapper[5034]: I0105 21:52:33.997204 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:33Z","lastTransitionTime":"2026-01-05T21:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.099867 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.099909 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.099921 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.099937 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.099948 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.202176 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.202217 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.202225 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.202238 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.202248 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.305388 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.305455 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.305478 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.305507 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.305527 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.407790 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.407838 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.407850 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.407863 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.407874 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.510181 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.510234 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.510245 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.510263 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.510277 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.613145 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.613193 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.613208 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.613238 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.613252 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.715602 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.715646 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.715660 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.715677 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.715691 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.817647 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.817689 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.817700 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.817717 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.817729 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.838048 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.838118 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.838056 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:34 crc kubenswrapper[5034]: E0105 21:52:34.838195 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.838229 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:34 crc kubenswrapper[5034]: E0105 21:52:34.838349 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:34 crc kubenswrapper[5034]: E0105 21:52:34.838429 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:34 crc kubenswrapper[5034]: E0105 21:52:34.838498 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.919941 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.919978 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.919991 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.920007 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:34 crc kubenswrapper[5034]: I0105 21:52:34.920019 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:34Z","lastTransitionTime":"2026-01-05T21:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.022046 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.022109 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.022121 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.022138 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.022148 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.124446 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.124574 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.124584 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.124597 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.124606 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.227293 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.227342 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.227350 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.227364 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.227375 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.329679 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.329707 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.329715 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.329727 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.329737 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.432681 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.432756 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.432767 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.432781 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.432789 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.536834 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.536882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.536896 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.536916 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.536928 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.639622 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.639694 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.639717 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.639748 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.639770 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.742562 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.742630 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.742653 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.742682 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.742703 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.845217 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.845253 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.845263 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.845276 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.845285 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.947662 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.947698 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.947706 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.947718 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:35 crc kubenswrapper[5034]: I0105 21:52:35.947727 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:35Z","lastTransitionTime":"2026-01-05T21:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.050434 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.050478 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.050490 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.050508 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.050521 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.152557 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.152593 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.152601 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.152615 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.152625 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.255210 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.255281 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.255303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.255332 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.255353 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.357757 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.357995 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.358057 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.358179 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.358247 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.461552 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.461861 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.461955 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.462064 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.462193 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.565029 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.565068 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.565107 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.565123 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.565135 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.667653 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.667689 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.667702 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.667717 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.667727 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.770631 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.770686 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.770709 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.770731 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.770744 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.837318 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.837362 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.837467 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:36 crc kubenswrapper[5034]: E0105 21:52:36.837472 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.837509 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:36 crc kubenswrapper[5034]: E0105 21:52:36.837607 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:36 crc kubenswrapper[5034]: E0105 21:52:36.837686 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:36 crc kubenswrapper[5034]: E0105 21:52:36.837874 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.875882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.875926 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.875944 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.875960 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.875971 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.978369 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.978424 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.978433 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.978447 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:36 crc kubenswrapper[5034]: I0105 21:52:36.978458 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:36Z","lastTransitionTime":"2026-01-05T21:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.081907 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.081964 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.081983 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.082008 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.082040 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.184570 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.184614 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.184630 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.184646 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.184657 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.241794 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:37 crc kubenswrapper[5034]: E0105 21:52:37.241962 5034 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:37 crc kubenswrapper[5034]: E0105 21:52:37.242057 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs podName:7949c792-bd35-4fb3-9235-402a13c61026 nodeName:}" failed. No retries permitted until 2026-01-05 21:52:53.24203441 +0000 UTC m=+65.614033859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs") pod "network-metrics-daemon-99zr4" (UID: "7949c792-bd35-4fb3-9235-402a13c61026") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.287133 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.287173 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.287184 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.287199 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.287210 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.390023 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.390061 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.390088 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.390105 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.390116 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.491876 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.492155 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.492285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.492396 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.492481 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.594314 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.594346 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.594354 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.594368 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.594376 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.696303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.696338 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.696348 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.696363 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.696375 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.798035 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.798282 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.798347 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.798411 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.798465 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.850885 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.867831 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.878399 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.898028 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef
1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.901180 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.901238 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.901253 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.901268 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.901278 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:37Z","lastTransitionTime":"2026-01-05T21:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.913461 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.927202 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.938645 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.949844 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.963290 5034 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:37 crc kubenswrapper[5034]: I0105 21:52:37.979751 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.003414 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.003447 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.003455 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.003468 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.003478 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.007458 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb
0232d87ced1af431fb96305a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 
21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.017711 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.028981 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.043829 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.053734 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.063480 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.066888 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.071856 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.078551 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.089322 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.104562 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.105335 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.105393 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.105407 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.105424 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.105438 5034 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.117023 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.129319 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\
\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.150327 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb
0232d87ced1af431fb96305a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e71612ae9e1bdd10cc3a64161896c871fb85fa255de819beda30449eaa4759\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"message\\\":\\\"ift-kube-apiserver/kube-apiserver-crc\\\\nI0105 21:52:18.899068 6455 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-95tx4 after 0 failed attempt(s)\\\\nI0105 21:52:18.899134 6455 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-95tx4\\\\nI0105 21:52:18.899144 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:18.899146 6455 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0105 21:52:18.899142 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 
21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.160362 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.170225 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.181892 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.194767 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 
2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.204354 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.207166 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.207192 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.207201 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.207214 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.207224 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.216367 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.228743 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.239439 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.247495 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.263423 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef
1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.273884 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.283901 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.294338 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:38Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.308628 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.308668 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.308680 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.308695 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.308707 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.411382 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.411423 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.411434 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.411449 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.411460 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.513749 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.513782 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.513791 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.513808 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.513820 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.615261 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.615561 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.615578 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.615598 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.615615 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.656634 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.656734 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:53:10.656713797 +0000 UTC m=+83.028713246 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.718424 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.718488 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.718502 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.718516 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.718526 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.757398 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.757434 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.757456 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757478 5034 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757542 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-05 21:53:10.757528554 +0000 UTC m=+83.129527993 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757561 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757576 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757588 5034 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.757482 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757621 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:53:10.757610716 +0000 UTC m=+83.129610155 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757653 5034 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757666 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757766 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:53:10.75774649 +0000 UTC m=+83.129745989 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757773 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757787 5034 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.757825 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:53:10.757811832 +0000 UTC m=+83.129811331 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.821039 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.821105 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.821119 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.821134 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.821145 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.838306 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.838343 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.838322 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.838305 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.838452 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.838492 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.838523 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:38 crc kubenswrapper[5034]: E0105 21:52:38.838549 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.923850 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.923884 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.923895 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.923912 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:38 crc kubenswrapper[5034]: I0105 21:52:38.923925 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:38Z","lastTransitionTime":"2026-01-05T21:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.026519 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.026580 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.026600 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.026618 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.026630 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.128809 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.128840 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.128848 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.128861 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.128870 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.234273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.234330 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.234340 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.234357 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.234372 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.336361 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.336396 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.336403 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.336417 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.336425 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.438895 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.438979 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.438998 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.439026 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.439046 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.540941 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.540982 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.540990 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.541005 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.541014 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.643181 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.643220 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.643230 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.643246 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.643255 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.745646 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.745684 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.745697 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.745710 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.745719 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.847625 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.847661 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.847670 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.847684 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.847692 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.949801 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.949833 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.949841 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.949854 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:39 crc kubenswrapper[5034]: I0105 21:52:39.949863 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:39Z","lastTransitionTime":"2026-01-05T21:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.052320 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.052355 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.052365 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.052402 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.052414 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.155222 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.155301 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.155315 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.155335 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.155347 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.257762 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.257812 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.257828 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.257848 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.257863 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.360571 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.360652 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.360665 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.360681 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.360691 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.463385 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.463445 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.463465 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.463485 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.463499 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.566179 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.566229 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.566245 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.566264 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.566278 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.668671 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.668708 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.668719 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.668755 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.668768 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.771269 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.771334 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.771353 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.771374 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.771389 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.838392 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.838425 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.838458 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.838425 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:40 crc kubenswrapper[5034]: E0105 21:52:40.838521 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:40 crc kubenswrapper[5034]: E0105 21:52:40.838630 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:40 crc kubenswrapper[5034]: E0105 21:52:40.838702 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:40 crc kubenswrapper[5034]: E0105 21:52:40.838773 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.873735 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.873911 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.873925 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.873940 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.873950 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.976067 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.976146 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.976158 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.976180 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:40 crc kubenswrapper[5034]: I0105 21:52:40.976194 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:40Z","lastTransitionTime":"2026-01-05T21:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.077929 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.077975 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.077986 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.077999 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.078007 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.180044 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.180188 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.180208 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.180243 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.180264 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.282592 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.282631 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.282642 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.282660 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.282672 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.347483 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.347520 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.347528 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.347543 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.347556 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: E0105 21:52:41.358829 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.366769 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.366814 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.366826 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.366845 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.366857 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: E0105 21:52:41.378453 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.382456 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.382498 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.382512 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.382528 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.382542 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: E0105 21:52:41.394523 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.399462 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.399509 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.399525 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.399547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.399559 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: E0105 21:52:41.414345 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.419018 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.419116 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.419136 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.419167 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.419186 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: E0105 21:52:41.436712 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:41 crc kubenswrapper[5034]: E0105 21:52:41.436946 5034 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.439552 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.439623 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.439641 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.439673 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.439693 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.543193 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.543284 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.543322 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.543358 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.543382 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.646500 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.646543 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.646552 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.646567 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.646575 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.749401 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.749462 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.749473 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.749487 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.749497 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.852500 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.852539 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.852551 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.852570 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.852583 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.955682 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.955714 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.955722 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.955735 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:41 crc kubenswrapper[5034]: I0105 21:52:41.955744 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:41Z","lastTransitionTime":"2026-01-05T21:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.059498 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.059540 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.059551 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.059570 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.059582 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.162317 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.162626 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.162713 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.162798 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.162883 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.265246 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.265548 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.265650 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.265751 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.265833 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.368520 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.368565 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.368575 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.368593 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.368604 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.471162 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.471206 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.471217 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.471234 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.471246 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.574405 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.574530 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.574559 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.574600 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.574626 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.678031 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.678148 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.678174 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.678201 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.678221 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.781975 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.782057 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.782069 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.782104 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.782117 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.837939 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.838003 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.837973 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.838219 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:42 crc kubenswrapper[5034]: E0105 21:52:42.838341 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:42 crc kubenswrapper[5034]: E0105 21:52:42.838575 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:42 crc kubenswrapper[5034]: E0105 21:52:42.838692 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:42 crc kubenswrapper[5034]: E0105 21:52:42.838805 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.885680 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.885755 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.885773 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.885805 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.885826 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.989187 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.989263 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.989277 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.989294 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:42 crc kubenswrapper[5034]: I0105 21:52:42.989304 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:42Z","lastTransitionTime":"2026-01-05T21:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.093419 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.093485 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.093504 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.093531 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.093553 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.196668 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.196795 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.196868 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.196911 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.196935 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.301859 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.301922 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.301941 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.301965 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.302031 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.406427 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.406471 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.406483 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.406497 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.406508 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.508928 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.508967 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.508984 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.509007 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.509022 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.611682 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.611720 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.611731 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.611743 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.611754 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.714419 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.714487 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.714505 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.714533 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.714552 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.817014 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.817162 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.817190 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.817233 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.817261 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.839018 5034 scope.go:117] "RemoveContainer" containerID="9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a" Jan 05 21:52:43 crc kubenswrapper[5034]: E0105 21:52:43.839389 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.851049 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.868686 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.885717 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.903301 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.921612 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.921663 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.921675 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.921694 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.921708 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:43Z","lastTransitionTime":"2026-01-05T21:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.921567 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.936039 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.947259 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.959750 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.974471 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:43 crc kubenswrapper[5034]: I0105 21:52:43.987101 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:43.999992 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.021434 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:44Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.024756 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.024856 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.024882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.024915 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.024933 5034 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.038359 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:44Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.057513 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:44Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.075811 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb
0232d87ced1af431fb96305a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:44Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.086043 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:44Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.096503 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:44Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.107831 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:44Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.128546 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.128613 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.128626 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.128650 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.128676 5034 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.230727 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.230764 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.230775 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.230788 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.230797 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.333802 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.333877 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.333899 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.333926 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.333945 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.436528 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.436588 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.436601 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.436624 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.436642 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.538492 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.538547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.538556 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.538573 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.538583 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.641445 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.641515 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.641532 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.641559 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.641578 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.744612 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.744677 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.744687 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.744704 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.744715 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.837373 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:44 crc kubenswrapper[5034]: E0105 21:52:44.837486 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.837482 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.837529 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:44 crc kubenswrapper[5034]: E0105 21:52:44.837574 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.837540 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:44 crc kubenswrapper[5034]: E0105 21:52:44.837750 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:44 crc kubenswrapper[5034]: E0105 21:52:44.837805 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.846689 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.846722 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.846731 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.846744 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.846754 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.949488 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.949554 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.949573 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.949598 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:44 crc kubenswrapper[5034]: I0105 21:52:44.949617 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:44Z","lastTransitionTime":"2026-01-05T21:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.052967 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.053019 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.053033 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.053046 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.053055 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.154491 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.154526 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.154535 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.154551 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.154562 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.257400 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.257438 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.257446 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.257557 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.257574 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.360358 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.360395 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.360406 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.360421 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.360431 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.462627 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.462686 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.462702 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.462724 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.462741 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.564873 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.564915 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.564928 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.564968 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.564981 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.666802 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.666843 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.666851 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.666864 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.666872 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.768801 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.768837 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.768845 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.768858 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.768866 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.871329 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.871373 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.871386 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.871402 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.871410 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.973415 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.973455 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.973465 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.973478 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:45 crc kubenswrapper[5034]: I0105 21:52:45.973489 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:45Z","lastTransitionTime":"2026-01-05T21:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.075523 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.075570 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.075578 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.075593 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.075601 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.177574 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.177618 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.177629 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.177645 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.177656 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.280182 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.280216 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.280225 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.280238 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.280246 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.382027 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.382059 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.382068 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.382103 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.382115 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.484108 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.484141 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.484159 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.484173 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.484183 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.589882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.589914 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.589924 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.589940 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.589951 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.691351 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.691385 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.691394 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.691406 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.691414 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.793876 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.793908 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.793916 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.793952 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.793961 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
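The five-entry block above repeats at roughly 100 ms intervals for as long as the CNI configuration is missing: each node-status sync re-records the four condition events and sets Ready=False with reason KubeletNotReady. The condition={...} payload is plain JSON, so it can be pulled out of a journal like this mechanically when triaging. A minimal Python sketch, assuming exactly the line format logged above; the regex and names are illustrative, not part of kubelet:

```python
import json
import re
import sys

# Minimal sketch: extract the Ready condition from "Node became not ready"
# journal lines in the exact format shown above. The condition={...}
# payload that kubelet logs is valid JSON, so it parses directly.
NOT_READY = re.compile(
    r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*\})'
)

def parse_not_ready(line):
    """Return (node, reason, message) for a matching line, else None."""
    m = NOT_READY.search(line)
    if m is None:
        return None
    cond = json.loads(m.group("cond"))
    return m.group("node"), cond.get("reason"), cond.get("message")

if __name__ == "__main__":
    # e.g. journalctl -u kubelet | python3 this_script.py
    for line in sys.stdin:
        parsed = parse_not_ready(line)
        if parsed:
            print(parsed)
```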
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.838354 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.838399 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.838365 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.838362 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:46 crc kubenswrapper[5034]: E0105 21:52:46.838477 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:52:46 crc kubenswrapper[5034]: E0105 21:52:46.838564 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:52:46 crc kubenswrapper[5034]: E0105 21:52:46.838635 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:52:46 crc kubenswrapper[5034]: E0105 21:52:46.838701 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.895331 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.895363 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.895409 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.895429 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.895438 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.998046 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.998097 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.998106 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.998119 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:46 crc kubenswrapper[5034]: I0105 21:52:46.998127 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:46Z","lastTransitionTime":"2026-01-05T21:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.103532 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.103630 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.103640 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.103658 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.103668 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.206360 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.206398 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.206410 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.206424 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.206434 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.308899 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.308934 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.308945 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.308960 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.308968 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.410926 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.410984 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.411001 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.411022 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.411038 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.513273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.513341 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.513355 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.513370 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.513381 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.615402 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.615433 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.615442 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.615456 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.615468 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.718131 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.718164 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.718175 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.718189 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.718199 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.819806 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.819842 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.819854 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.819869 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.819878 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.851934 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.864541 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.876849 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.888852 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.898066 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.911128 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.921020 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.922359 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.922400 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.922410 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.922424 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.922435 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:47Z","lastTransitionTime":"2026-01-05T21:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.932964 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.955825 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee3434
9577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.969618 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f
08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.981039 5034 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469e
e94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:47 crc kubenswrapper[5034]: I0105 21:52:47.991645 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:47Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.004379 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:48Z is after 2025-08-24T17:21:41Z" 
Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.015147 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:48Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.024843 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.024892 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.024904 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.024919 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.024933 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.028119 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\
"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:48Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.048380 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb
0232d87ced1af431fb96305a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:48Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.058301 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:48Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.068027 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:48Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.127600 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.127651 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.127668 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.127690 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.127705 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.229957 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.230006 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.230017 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.230034 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.230047 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.332181 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.332499 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.332697 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.332814 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.332955 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.435737 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.435774 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.435782 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.435795 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.435805 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.537961 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.538000 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.538010 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.538023 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.538034 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.640201 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.640237 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.640245 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.640259 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.640268 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.745763 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.745811 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.746097 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.746112 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.746123 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.838387 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.838418 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:48 crc kubenswrapper[5034]: E0105 21:52:48.838512 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.838528 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:48 crc kubenswrapper[5034]: E0105 21:52:48.838636 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:48 crc kubenswrapper[5034]: E0105 21:52:48.838722 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.838813 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:48 crc kubenswrapper[5034]: E0105 21:52:48.838967 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.848152 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.848254 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.848315 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.848378 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.848440 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.951430 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.951519 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.951533 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.951555 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:48 crc kubenswrapper[5034]: I0105 21:52:48.951567 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:48Z","lastTransitionTime":"2026-01-05T21:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.053966 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.054276 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.054344 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.054418 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.054489 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.156037 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.156317 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.156409 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.156515 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.156595 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.259133 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.259403 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.259486 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.259558 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.259626 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.362351 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.362402 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.362415 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.362432 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.362444 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.464413 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.464454 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.464468 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.464484 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.464497 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.566599 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.566633 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.566643 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.566683 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.566695 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.668687 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.668955 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.669052 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.669171 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.669261 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.771655 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.771704 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.771713 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.771725 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.771734 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.873869 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.873934 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.873944 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.873956 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.873966 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.975934 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.976243 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.976332 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.976398 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:49 crc kubenswrapper[5034]: I0105 21:52:49.976465 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:49Z","lastTransitionTime":"2026-01-05T21:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.078274 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.078508 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.078584 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.078660 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.078726 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.181220 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.181271 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.181282 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.181297 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.181315 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.283751 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.283776 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.283783 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.283798 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.283808 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.386424 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.386464 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.386473 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.386487 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.386496 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.488956 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.489014 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.489034 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.489055 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.489103 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.591190 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.591743 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.591837 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.591912 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.591973 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.694297 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.694347 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.694357 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.694370 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.694379 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.796328 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.796364 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.796373 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.796387 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.796397 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.837480 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.837518 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.837522 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:50 crc kubenswrapper[5034]: E0105 21:52:50.837624 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:50 crc kubenswrapper[5034]: E0105 21:52:50.837776 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:50 crc kubenswrapper[5034]: E0105 21:52:50.837888 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.838263 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:50 crc kubenswrapper[5034]: E0105 21:52:50.838472 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.898105 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.898141 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.898153 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.898167 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:50 crc kubenswrapper[5034]: I0105 21:52:50.898178 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:50Z","lastTransitionTime":"2026-01-05T21:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.000456 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.000495 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.000505 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.000521 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.000588 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.102274 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.102319 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.102332 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.102374 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.102393 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.204248 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.204297 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.204310 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.204327 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.204339 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.305885 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.305922 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.305930 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.305942 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.305952 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.408246 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.408281 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.408290 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.408303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.408314 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.510283 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.510320 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.510329 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.510345 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.510353 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.562663 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.562710 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.562720 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.562734 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.562747 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: E0105 21:52:51.574984 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.578152 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.578178 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.578186 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.578197 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.578206 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: E0105 21:52:51.588558 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.591324 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.591351 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
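Every one of the failed status patches in this journal has the same root cause, spelled out at the end of the err string: the serving certificate of the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock now reads 2026-01-05. A minimal sketch of how the certificate window could be confirmed from inside the CRC VM (hypothetical diagnostic, not part of this journal; it assumes Python with the third-party cryptography package is available on the node, since the webhook only listens on loopback):

    import ssl
    from cryptography import x509  # assumption: the 'cryptography' package is installed

    # Fetch the webhook's serving certificate without verifying it,
    # so an already-expired certificate can still be retrieved.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expected: 2025-08-24 17:21:41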
event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.591362 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.591375 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.591384 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: E0105 21:52:51.601692 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.604629 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.604647 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
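The kubelet retries the status PATCH in a tight loop, up to five attempts per sync (the nodeStatusUpdateRetry constant in kubelet_node_status.go), before it logs the "update node status exceeds retry count" entry that appears further down. A throwaway sketch for pulling the retry timestamps out of a saved copy of this journal (the kubelet.log filename is hypothetical):

    import re

    # Every failed status patch is logged from kubelet_node_status.go:585.
    pat = re.compile(r'E0105 (\d{2}:\d{2}:\d{2}\.\d+) \d+ kubelet_node_status\.go:585\]')
    with open("kubelet.log") as fh:  # hypothetical saved copy of this journal
        stamps = pat.findall(fh.read())
    print(len(stamps), "failed status patches at:", stamps)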
event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.604655 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.604669 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.604679 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: E0105 21:52:51.614494 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.617132 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.617154 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.617163 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.617174 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.617182 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: E0105 21:52:51.628000 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:51 crc kubenswrapper[5034]: E0105 21:52:51.628127 5034 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.629502 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
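Independently of the webhook failure, the Ready condition keeps reporting False because no network plugin has written a CNI configuration yet; the kubelet names the directory it is watching. A small sketch for checking it (hypothetical helper, run on the node; the CNI loader typically picks up .conf, .conflist and .json files):

    from pathlib import Path

    # Directory taken verbatim from the NetworkReady message above.
    cni_dir = Path("/etc/kubernetes/cni/net.d")
    entries = sorted(p.name for p in cni_dir.iterdir()) if cni_dir.is_dir() else []
    print(cni_dir, "->", entries or "empty: the network plugin has not written its configuration yet")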
event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.629528 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.629536 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.629566 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.629575 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.732143 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.732179 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.732189 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.732204 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.732213 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.833991 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.834303 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.834402 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.834493 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.834565 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.936283 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.936319 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.936328 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.936341 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:51 crc kubenswrapper[5034]: I0105 21:52:51.936350 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:51Z","lastTransitionTime":"2026-01-05T21:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.038772 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.038818 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.038826 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.038843 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.038853 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.141229 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.141265 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.141273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.141285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.141294 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.243677 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.243708 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.243716 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.243728 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.243736 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.345974 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.346008 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.346017 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.346032 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.346041 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.448405 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.448459 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.448470 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.448485 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.448496 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.550728 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.550761 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.550770 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.550782 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.550792 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.653375 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.653415 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.653426 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.653440 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.653448 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.755276 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.755311 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.755319 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.755333 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.755342 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.837342 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.837343 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.837364 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.837473 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:52 crc kubenswrapper[5034]: E0105 21:52:52.837584 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:52 crc kubenswrapper[5034]: E0105 21:52:52.837680 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:52 crc kubenswrapper[5034]: E0105 21:52:52.837745 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:52 crc kubenswrapper[5034]: E0105 21:52:52.837825 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.857463 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.857508 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.857522 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.857541 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.857554 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.959446 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.959506 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.959523 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.959547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:52 crc kubenswrapper[5034]: I0105 21:52:52.959564 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:52Z","lastTransitionTime":"2026-01-05T21:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.061501 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.061531 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.061540 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.061553 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.061562 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.163656 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.163702 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.163716 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.163733 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.163743 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.265409 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.265449 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.265458 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.265473 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.265483 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.305556 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:53 crc kubenswrapper[5034]: E0105 21:52:53.305807 5034 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:53 crc kubenswrapper[5034]: E0105 21:52:53.305891 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs podName:7949c792-bd35-4fb3-9235-402a13c61026 nodeName:}" failed. No retries permitted until 2026-01-05 21:53:25.305868756 +0000 UTC m=+97.677868225 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs") pod "network-metrics-daemon-99zr4" (UID: "7949c792-bd35-4fb3-9235-402a13c61026") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.368312 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.368357 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.368369 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.368386 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.368396 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.470419 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.470452 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.470461 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.470491 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.470500 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.572698 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.572735 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.572747 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.572759 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.572768 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.675018 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.675060 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.675070 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.675102 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.675114 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.777590 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.777632 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.777642 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.777657 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.777667 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.838093 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:53 crc kubenswrapper[5034]: E0105 21:52:53.838232 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.880232 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.880274 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.880283 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.880298 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.880308 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.981882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.981914 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.981922 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.981936 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:53 crc kubenswrapper[5034]: I0105 21:52:53.981946 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:53Z","lastTransitionTime":"2026-01-05T21:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.084227 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.084268 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.084278 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.084293 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.084302 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.174326 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/0.log" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.174370 5034 generic.go:334] "Generic (PLEG): container finished" podID="691cc76e-ed89-4547-9bb1-58b03c8f7932" containerID="23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740" exitCode=1 Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.174393 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tsch6" event={"ID":"691cc76e-ed89-4547-9bb1-58b03c8f7932","Type":"ContainerDied","Data":"23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.174689 5034 scope.go:117] "RemoveContainer" containerID="23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.185859 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.186462 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.186489 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.186497 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.186508 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.186517 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.196282 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.214017 
5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-0
5T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.226445 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.239823 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:53Z\\\",\\\"message\\\":\\\"2026-01-05T21:52:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e\\\\n2026-01-05T21:52:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e to /host/opt/cni/bin/\\\\n2026-01-05T21:52:08Z [verbose] multus-daemon started\\\\n2026-01-05T21:52:08Z [verbose] Readiness Indicator file check\\\\n2026-01-05T21:52:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.257576 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.268031 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.276773 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.288849 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.288886 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.288896 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.288873 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.288910 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.289050 5034 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.299634 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.313787 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverri
de-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.325790 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.337280 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.348614 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.360670 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.370691 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.379595 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.387910 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:54Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.391285 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.391317 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.391325 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.391340 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.391349 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.492992 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.493018 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.493026 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.493039 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.493047 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.595068 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.595107 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.595115 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.595126 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.595134 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.697226 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.697262 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.697273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.697288 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.697299 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.799682 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.799709 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.799718 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.799729 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.799737 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.837312 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:54 crc kubenswrapper[5034]: E0105 21:52:54.837441 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.837786 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:54 crc kubenswrapper[5034]: E0105 21:52:54.837862 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.837915 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:54 crc kubenswrapper[5034]: E0105 21:52:54.837965 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.901678 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.901714 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.901726 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.901739 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:54 crc kubenswrapper[5034]: I0105 21:52:54.901748 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:54Z","lastTransitionTime":"2026-01-05T21:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.004299 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.004335 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.004347 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.004361 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.004370 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.106224 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.106283 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.106300 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.106323 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.106342 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.179167 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/0.log" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.179230 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tsch6" event={"ID":"691cc76e-ed89-4547-9bb1-58b03c8f7932","Type":"ContainerStarted","Data":"5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a"} Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.190849 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.202988 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.208407 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.208440 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.208460 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.208473 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.208482 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.218627 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.233336 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.254137 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee3434
9577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.271796 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f
08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.283898 5034 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469e
e94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.295477 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.308681 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" 
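The status_manager failures above all share one root cause: every pod-status PATCH is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose notAfter is 2025-08-24T17:21:41Z while the node clock reads 2026-01-05. The following minimal Go sketch is not part of OpenShift and assumes it runs on the node itself (where 127.0.0.1:9743 is reachable); it performs the same validity-window check that produces the x509 error, by completing a handshake without chain verification and printing the served certificate's notBefore/notAfter.

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Endpoint taken verbatim from the failed webhook POSTs in the log above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // inspect the served cert without trusting it
        })
        if err != nil {
            log.Fatalf("handshake failed: %v", err)
        }
        defer conn.Close()

        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
                cert.Subject.CommonName,
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339))
            // The same window check crypto/x509 applies during verification;
            // with the node clock at 2026-01-05 and notAfter at 2025-08-24,
            // this hits exactly the "certificate has expired" branch seen above.
            switch {
            case now.After(cert.NotAfter):
                fmt.Printf("  expired %s ago\n", now.Sub(cert.NotAfter).Round(time.Hour))
            case now.Before(cert.NotBefore):
                fmt.Printf("  not valid for another %s\n", cert.NotBefore.Sub(now).Round(time.Hour))
            }
        }
    }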
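The interleaved NodeNotReady conditions come from a second, independent problem: the container runtime reports NetworkReady=false until a CNI configuration file appears under /etc/kubernetes/cni/net.d/ (the message's own hint, "Has your network provider started?", together with the multus-tsch6 ContainerStarted event, suggests the network provider was still coming up). A companion sketch under the same assumptions simply lists that directory; once the network provider writes its configuration there, the Ready condition should clear on a subsequent sync.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const dir = "/etc/kubernetes/cni/net.d/" // path quoted in the kubelet message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            return
        }
        if len(entries) == 0 {
            fmt.Printf("%s is empty; matches NetworkReady=false in the log\n", dir)
            return
        }
        for _, e := range entries {
            fmt.Println(dir + e.Name())
        }
    }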
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.310299 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.310333 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.310341 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.310355 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.310365 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.320749 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.332377 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:53Z\\\",\\\"message\\\":\\\"2026-01-05T21:52:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e\\\\n2026-01-05T21:52:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e to /host/opt/cni/bin/\\\\n2026-01-05T21:52:08Z [verbose] multus-daemon started\\\\n2026-01-05T21:52:08Z [verbose] Readiness Indicator file check\\\\n2026-01-05T21:52:53Z [error] have you 
checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.348493 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.357546 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.365400 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.374856 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.384697 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.396429 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.404730 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:55Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.414565 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.414596 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.414607 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.414627 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.414638 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.516869 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.516922 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.516935 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.516954 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.516966 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.620202 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.620255 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.620265 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.620278 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.620288 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.722505 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.722550 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.722560 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.722575 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.722586 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.825477 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.825542 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.825565 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.825593 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.825613 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.838283 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:52:55 crc kubenswrapper[5034]: E0105 21:52:55.838513 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.928068 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.928121 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.928130 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.928146 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:55 crc kubenswrapper[5034]: I0105 21:52:55.928155 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:55Z","lastTransitionTime":"2026-01-05T21:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.030788 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.030819 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.030828 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.030842 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.030851 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.132816 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.132852 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.132860 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.132872 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.132880 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.235695 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.235737 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.235750 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.235766 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.235775 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.337900 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.337929 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.337939 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.337952 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.337962 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.440391 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.440425 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.440436 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.440452 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.440465 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.542981 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.543035 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.543047 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.543063 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.543090 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.645316 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.645375 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.645392 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.645413 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.645429 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.747693 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.747731 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.747743 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.747760 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.747771 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.837919 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.837919 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.837938 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:52:56 crc kubenswrapper[5034]: E0105 21:52:56.838288 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:52:56 crc kubenswrapper[5034]: E0105 21:52:56.838427 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:52:56 crc kubenswrapper[5034]: E0105 21:52:56.838483 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.838541 5034 scope.go:117] "RemoveContainer" containerID="9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.849199 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.849232 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.849244 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.849258 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.849269 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.951844 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.951871 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.951880 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.951904 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:56 crc kubenswrapper[5034]: I0105 21:52:56.951913 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:56Z","lastTransitionTime":"2026-01-05T21:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.054029 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.054067 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.054093 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.054109 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.054120 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.156425 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.156464 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.156475 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.156492 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.156503 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.190269 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/2.log" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.193072 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.193678 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.204258 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.216561 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.226797 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.240624 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.252619 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.269883 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.269934 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.270147 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.270177 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.270193 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.277104 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.288977 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.306245 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.331534 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.365402 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"da
ta-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e95
2b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.372498 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.372524 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.372533 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.372547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.372558 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.381693 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.398799 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.422181 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409adce771b0fd562bc2c76fce742fb46fef03da
1180b43eb0d9c972e61f9483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.432710 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.450036 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.463028 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.473747 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.474620 5034 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.474671 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.474682 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.474721 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.474748 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.486820 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:53Z\\\",\\\"message\\\":\\\"2026-01-05T21:52:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e\\\\n2026-01-05T21:52:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e to /host/opt/cni/bin/\\\\n2026-01-05T21:52:08Z [verbose] multus-daemon started\\\\n2026-01-05T21:52:08Z [verbose] Readiness Indicator file check\\\\n2026-01-05T21:52:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.576899 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.577164 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.577293 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.577413 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.577531 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.679745 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.679778 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.679786 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.679799 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.679809 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.781340 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.781374 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.781383 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.781397 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.781406 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.838122 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:57 crc kubenswrapper[5034]: E0105 21:52:57.838236 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.850553 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.867483 5034 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1
e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.879646 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc 
kubenswrapper[5034]: I0105 21:52:57.883156 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.883192 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.883202 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.883216 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.883227 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.891669 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.907261 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.916100 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.925688 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.937238 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.948209 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.964542 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:53Z\\\",\\\"message\\\":\\\"2026-01-05T21:52:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e\\\\n2026-01-05T21:52:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e to /host/opt/cni/bin/\\\\n2026-01-05T21:52:08Z [verbose] multus-daemon started\\\\n2026-01-05T21:52:08Z [verbose] Readiness Indicator file check\\\\n2026-01-05T21:52:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.975816 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.984905 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.984940 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.984952 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.984969 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.984981 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:57Z","lastTransitionTime":"2026-01-05T21:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.985889 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:57 crc kubenswrapper[5034]: I0105 21:52:57.995990 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.007881 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.019225 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.027960 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.038921 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.050748 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.087529 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.087573 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.087585 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.087602 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.087613 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.189355 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.189388 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.189396 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.189412 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.189424 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.196684 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/3.log" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.198645 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/2.log" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.200977 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" exitCode=1 Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.201020 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.201065 5034 scope.go:117] "RemoveContainer" containerID="9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.201650 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 21:52:58 crc kubenswrapper[5034]: E0105 21:52:58.201822 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.215981 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.227593 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.236703 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.247050 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:53Z\\\",\\\"message\\\":\\\"2026-01-05T21:52:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e\\\\n2026-01-05T21:52:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e to /host/opt/cni/bin/\\\\n2026-01-05T21:52:08Z [verbose] multus-daemon started\\\\n2026-01-05T21:52:08Z [verbose] Readiness Indicator file check\\\\n2026-01-05T21:52:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.261821 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb04511aa8173956aee9444e9396710edf9a0bb0232d87ced1af431fb96305a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:31Z\\\",\\\"message\\\":\\\"7-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794114 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0105 21:52:31.794130 6674 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0105 21:52:31.794228 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0105 21:52:31.794234 6674 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0105 21:52:31.794027 6674 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0105 21:52:31.794192 6674 services_controller.go:356] Processing sync for service openshift-kube-apiserver/apiserver for network=default\\\\nF0105 21:52:31.794272 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:58Z\\\",\\\"message\\\":\\\"105 21:52:57.872818 7055 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-hzbjx in node crc\\\\nI0105 21:52:57.872819 7055 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:52:57.872824 7055 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-hzbjx after 0 failed attempt(s)\\\\nF0105 21:52:57.872829 7055 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.271560 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.282974 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.291358 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.291401 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.291412 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.291428 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.291439 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.292746 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.304292 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.313535 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.323059 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.333411 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.342151 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.354039 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.370160 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.382118 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.401523 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.406225 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.406263 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.406273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.406286 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.406295 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.439457 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.508145 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 
21:52:58.508189 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.508201 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.508215 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.508225 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.610101 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.610130 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.610139 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.610153 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.610162 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.712284 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.712327 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.712339 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.712356 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.712369 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.814442 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.814761 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.814773 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.814810 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.814822 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.837971 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.838054 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:52:58 crc kubenswrapper[5034]: E0105 21:52:58.838119 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:52:58 crc kubenswrapper[5034]: E0105 21:52:58.838174 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.838249 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:52:58 crc kubenswrapper[5034]: E0105 21:52:58.838301 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.918012 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.918364 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.918519 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.918726 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:58 crc kubenswrapper[5034]: I0105 21:52:58.918923 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:58Z","lastTransitionTime":"2026-01-05T21:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.020820 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.020851 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.020859 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.020872 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.020880 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.123336 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.123376 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.123384 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.123398 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.123406 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.205440 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/3.log" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.208977 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 21:52:59 crc kubenswrapper[5034]: E0105 21:52:59.209133 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.220139 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.225117 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.225147 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.225157 5034 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.225172 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.225183 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.239524 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.251072 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.261593 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.280517 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409adce771b0fd562bc2c76fce742fb46fef03da
1180b43eb0d9c972e61f9483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:58Z\\\",\\\"message\\\":\\\"105 21:52:57.872818 7055 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-hzbjx in node crc\\\\nI0105 21:52:57.872819 7055 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:52:57.872824 7055 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-hzbjx after 0 failed attempt(s)\\\\nF0105 21:52:57.872829 7055 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.290273 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.299977 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.310598 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.319794 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.327807 5034 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.327857 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.327871 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.327884 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.327893 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.331493 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:53Z\\\",\\\"message\\\":\\\"2026-01-05T21:52:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e\\\\n2026-01-05T21:52:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e to /host/opt/cni/bin/\\\\n2026-01-05T21:52:08Z [verbose] multus-daemon started\\\\n2026-01-05T21:52:08Z [verbose] Readiness Indicator file check\\\\n2026-01-05T21:52:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.341991 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 
21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.354340 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.364361 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.378201 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.388945 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.397245 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.406280 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.417321 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:59Z is after 2025-08-24T17:21:41Z" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.430216 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.430245 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.430271 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.430286 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.430297 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.532029 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.532062 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.532071 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.532100 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.532121 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.634735 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.634782 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.634794 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.634808 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.634817 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.736597 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.736630 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.736638 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.736651 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.736660 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.837466 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:52:59 crc kubenswrapper[5034]: E0105 21:52:59.837604 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.838614 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.838654 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.838666 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.838683 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.838694 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.940854 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.940886 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.940895 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.940911 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:52:59 crc kubenswrapper[5034]: I0105 21:52:59.940920 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:52:59Z","lastTransitionTime":"2026-01-05T21:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.043164 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.043208 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.043218 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.043240 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.043254 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.145991 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.146037 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.146047 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.146089 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.146104 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.248661 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.248696 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.248705 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.248718 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.248727 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.351341 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.351381 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.351389 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.351404 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.351414 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.453353 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.453388 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.453396 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.453410 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.453419 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.554711 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.554743 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.554752 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.554764 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.554773 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.656993 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.657031 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.657041 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.657054 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.657066 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.759294 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.759338 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.759347 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.759360 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.759369 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.837962 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.837996 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:00 crc kubenswrapper[5034]: E0105 21:53:00.838117 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.838172 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:00 crc kubenswrapper[5034]: E0105 21:53:00.838333 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:00 crc kubenswrapper[5034]: E0105 21:53:00.838388 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.861708 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.861748 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.861759 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.861775 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.861787 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.963978 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.964045 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.964055 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.964069 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:00 crc kubenswrapper[5034]: I0105 21:53:00.964101 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:00Z","lastTransitionTime":"2026-01-05T21:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.065881 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.065912 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.065920 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.065934 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.065943 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.168096 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.168223 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.168240 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.168258 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.168270 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.270704 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.270736 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.270745 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.270759 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.270769 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.373152 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.373200 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.373231 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.373246 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.373256 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.475445 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.475509 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.475526 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.475549 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.475565 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.577758 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.577810 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.577828 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.577849 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.577865 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.679535 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.679584 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.679596 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.679615 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.679627 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.782052 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.782111 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.782121 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.782135 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.782145 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.832978 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.833019 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.833031 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.833046 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.833058 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.837402 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:01 crc kubenswrapper[5034]: E0105 21:53:01.837502 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:01 crc kubenswrapper[5034]: E0105 21:53:01.847598 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:01Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.851406 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.851444 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.851456 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.851472 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.851484 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: E0105 21:53:01.866581 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:01Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.870406 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.870434 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.870444 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.870459 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.870469 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: E0105 21:53:01.889798 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:01Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.893200 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.893233 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.893243 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.893257 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.893267 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: E0105 21:53:01.906772 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:01Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.910921 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.910954 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.910965 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.910978 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.910989 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:01 crc kubenswrapper[5034]: E0105 21:53:01.929683 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:01Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:01 crc kubenswrapper[5034]: E0105 21:53:01.929832 5034 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.931190 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.931221 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.931232 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.931248 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:01 crc kubenswrapper[5034]: I0105 21:53:01.931260 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:01Z","lastTransitionTime":"2026-01-05T21:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.034179 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.034279 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.034290 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.034306 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.034317 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:02Z","lastTransitionTime":"2026-01-05T21:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.137447 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.137503 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.137523 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.137553 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.137575 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:02Z","lastTransitionTime":"2026-01-05T21:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.240829 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.240869 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.240882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.240898 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.240910 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:02Z","lastTransitionTime":"2026-01-05T21:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.343869 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.343912 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.343926 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.343943 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.343957 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:02Z","lastTransitionTime":"2026-01-05T21:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.445673 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.445706 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.445715 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.445729 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.445740 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:02Z","lastTransitionTime":"2026-01-05T21:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.548787 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.548825 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.548836 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.548850 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.548861 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:02Z","lastTransitionTime":"2026-01-05T21:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.651751 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.651789 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.651802 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.651819 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.651849 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:02Z","lastTransitionTime":"2026-01-05T21:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.754882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.754962 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.754985 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.755016 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.755037 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:02Z","lastTransitionTime":"2026-01-05T21:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.837867 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.837959 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:02 crc kubenswrapper[5034]: I0105 21:53:02.837867 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:02 crc kubenswrapper[5034]: E0105 21:53:02.838151 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:02 crc kubenswrapper[5034]: E0105 21:53:02.838310 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:02 crc kubenswrapper[5034]: E0105 21:53:02.838427 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
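All of the sync errors above share one root cause: kubelet finds no CNI network configuration in /etc/kubernetes/cni/net.d/, so the node stays NotReady and no pod sandbox can be created. On this cluster the network provider (OVN-Kubernetes/Multus, judging from the pod names in the log) is expected to write the real file there once it starts; the sketch below only illustrates the kind of conflist kubelet is polling for. The file name, network name, bridge device, and subnet are hypothetical placeholders, not values taken from this log.

#!/usr/bin/env python3
# Illustrative sketch only: writes a minimal CNI conflist of the kind kubelet
# looks for in /etc/kubernetes/cni/net.d/. On OpenShift/CRC the network
# provider generates the real config; every value below is a placeholder.
import json

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",        # hypothetical network name
    "plugins": [
        {
            "type": "bridge",     # simple reference plugin, not OVN-Kubernetes
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # placeholder pod subnet
            },
        }
    ],
}

# Kubelet only considers *.conf/*.conflist files in the CNI conf directory.
with open("/etc/kubernetes/cni/net.d/10-example.conflist", "w") as f:
    json.dump(conflist, f, indent=2)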
Jan 05 21:53:03 crc kubenswrapper[5034]: I0105 21:53:03.063619 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:03 crc kubenswrapper[5034]: I0105 21:53:03.063681 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:03 crc kubenswrapper[5034]: I0105 21:53:03.063693 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:03 crc kubenswrapper[5034]: I0105 21:53:03.063705 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:03 crc kubenswrapper[5034]: I0105 21:53:03.063714 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:03Z","lastTransitionTime":"2026-01-05T21:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:03 crc kubenswrapper[5034]: I0105 21:53:03.837560 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:03 crc kubenswrapper[5034]: E0105 21:53:03.837734 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:04 crc kubenswrapper[5034]: I0105 21:53:04.085655 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:04 crc kubenswrapper[5034]: I0105 21:53:04.085709 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:04 crc kubenswrapper[5034]: I0105 21:53:04.085722 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:04 crc kubenswrapper[5034]: I0105 21:53:04.085742 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:04 crc kubenswrapper[5034]: I0105 21:53:04.085755 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:04Z","lastTransitionTime":"2026-01-05T21:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:04 crc kubenswrapper[5034]: I0105 21:53:04.837984 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:04 crc kubenswrapper[5034]: I0105 21:53:04.838021 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:04 crc kubenswrapper[5034]: I0105 21:53:04.838106 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:04 crc kubenswrapper[5034]: E0105 21:53:04.838217 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:04 crc kubenswrapper[5034]: E0105 21:53:04.838305 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:04 crc kubenswrapper[5034]: E0105 21:53:04.838400 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:05 crc kubenswrapper[5034]: I0105 21:53:05.111019 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:05 crc kubenswrapper[5034]: I0105 21:53:05.111050 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:05 crc kubenswrapper[5034]: I0105 21:53:05.111059 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:05 crc kubenswrapper[5034]: I0105 21:53:05.111073 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:05 crc kubenswrapper[5034]: I0105 21:53:05.111103 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:05Z","lastTransitionTime":"2026-01-05T21:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:05 crc kubenswrapper[5034]: I0105 21:53:05.838428 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:05 crc kubenswrapper[5034]: E0105 21:53:05.838641 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:06 crc kubenswrapper[5034]: I0105 21:53:06.138330 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:06 crc kubenswrapper[5034]: I0105 21:53:06.138380 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:06 crc kubenswrapper[5034]: I0105 21:53:06.138393 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:06 crc kubenswrapper[5034]: I0105 21:53:06.138411 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:06 crc kubenswrapper[5034]: I0105 21:53:06.138423 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:06Z","lastTransitionTime":"2026-01-05T21:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:06 crc kubenswrapper[5034]: I0105 21:53:06.838003 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:06 crc kubenswrapper[5034]: I0105 21:53:06.838033 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:06 crc kubenswrapper[5034]: I0105 21:53:06.838052 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:06 crc kubenswrapper[5034]: E0105 21:53:06.838192 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:06 crc kubenswrapper[5034]: E0105 21:53:06.838299 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:06 crc kubenswrapper[5034]: E0105 21:53:06.838869 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.062422 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.062461 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.062469 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.062481 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.062492 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:07Z","lastTransitionTime":"2026-01-05T21:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.370656 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.370689 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.370697 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.370709 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.370718 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:07Z","lastTransitionTime":"2026-01-05T21:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.473591 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.473640 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.473653 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.473669 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.473681 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:07Z","lastTransitionTime":"2026-01-05T21:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.576425 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.576463 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.576472 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.576488 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.576497 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:07Z","lastTransitionTime":"2026-01-05T21:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.678304 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.678358 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.678367 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.678380 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.678390 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:07Z","lastTransitionTime":"2026-01-05T21:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.781296 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.781339 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.781351 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.781368 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.781377 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:07Z","lastTransitionTime":"2026-01-05T21:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.838133 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:07 crc kubenswrapper[5034]: E0105 21:53:07.838290 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.853615 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbbe677202ec86491cdcc85cc6fee6990c63f62eab4d5d97cab4151a6fdfef50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.866119 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd89329-d259-499c-bfe9-747d547d10f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da8899e0d13e659ed9dab3973c8e229ac3c874522fa500bedebf00bc10ef843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xwg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frlwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.882221 5034 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tsch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"691cc76e-ed89-4547-9bb1-58b03c8f7932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:53Z\\\",\\\"message\\\":\\\"2026-01-05T21:52:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e\\\\n2026-01-05T21:52:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69eaafd5-5497-48ab-9c15-00483e5f835e to /host/opt/cni/bin/\\\\n2026-01-05T21:52:08Z [verbose] multus-daemon started\\\\n2026-01-05T21:52:08Z [verbose] Readiness Indicator file check\\\\n2026-01-05T21:52:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sg5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tsch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.883183 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.883220 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.883229 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.883244 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.883254 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:07Z","lastTransitionTime":"2026-01-05T21:53:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.901190 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"788e0f44-29c3-4c4a-afe9-33c26a965d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:52:58Z\\\",\\\"message\\\":\\\"105 21:52:57.872818 7055 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-hzbjx in node crc\\\\nI0105 21:52:57.872819 7055 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:52:57.872824 7055 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-hzbjx after 0 failed attempt(s)\\\\nF0105 21:52:57.872829 7055 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:52:57Z is after 2025-08-24T17:21:41Z]\\\\nI0105 
21\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v7pr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fmfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.912139 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf4h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66c7fd4b-e058-43d1-9ffe-c0e35978e0ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d38a4519b634d197cca2af192bcdb79dcb71b7d76371d69455a89c206f6a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d8hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf4h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.923312 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-99zr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7949c792-bd35-4fb3-9235-402a13c61026\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlhvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-99zr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.939555 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.954472 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13adffb8208e0eb44cee66524af9271cec29f87d01c6df33464015885a052b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.972593 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95tx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e658f8f-1b88-4076-92a9-dd1ebeca6bd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f3bdd8d03593dbd3da939e5a3ab731b7a6a014a35443106048b4909d301683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c26fe362707266a947e621bad3cedd5031cdfbd53a98a061a75232eb1611184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988baf82d636897efea9c2ec2b9bca9adc012fe6aba70d99f5ed0777caedf4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd0f027c249d8be06101dd9252f2cf9faa3de8f104bbe470f5344f08d635bc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://310d5ba8f3479771eaabdaf7da64d59a8587b6520ac433fc1e1cba29388c3e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:12Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606f42685e186d0d828eaace74080acccd1f59c9c607f4a7277935fc5722485e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2b68ca56508e706606351eecd1a9fcc8911a5713060543bd8d60a2923c260d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:52:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:52:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wbdb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95tx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 
2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.984330 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da6626bb-3c1d-4149-911b-32b988ab216c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9700eb6feac45d398fee65dc8ed76907491e9171ee471999c510e02a2590a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf2ed2017960dc07db142fb60b16a77cc089733b72ca65648875f0ee9f7ed54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jchpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l4hpw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.986182 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.986211 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.986220 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.986232 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.986240 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:07Z","lastTransitionTime":"2026-01-05T21:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:07 crc kubenswrapper[5034]: I0105 21:53:07.995071 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02963c8-9ebf-4538-8cd7-003e496d1882\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a604f6106133aa3f1da3d4c26abf9b4ac98d5cef5cc3bffcae1e1e393398a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a026484a49d68a309424a88356bdeab063e664676c367c3e91286db80f10f8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f3a1a087a1aa6780239c549edac3ac26b318d7e04d50631f05025aaee4285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c54f100dbd7eaa1ef50957c966309a9cb580d9956bcd1523f3a97af60e0a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.005611 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.017211 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.026435 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hzbjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33d2b819-50a6-427b-8503-a87d0fafc058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36617c5900a60962832bea9e4459aa32dc31f865bd7579a8bb69a9ae22535a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7jhx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:52:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hzbjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.042978 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"107ea35c-ebef-46de-bc83-16671a73816a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b6837d45ba092566a58d900535b6a4c7fd16a830963d2e0979f0ab159062c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1bb9f44f47b71924116dac8e48fb53c88fe17920df3f89e84cb908903b514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e14307172e32faa0e8a38f4b5071a45edf17804a5fee3d23864d0216235a7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c18715e84411838cfd5b28b567b6d8cee34349577ac8d767b542c24f617b40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a267061f1c5861bcec2eb39a923e33b8be7510c9c0ed4e176773c15e393996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://921fa597cd58deb8d284052975fb28a448d0609520ce845c372284794245b7e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef
1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081ed91e952b4ef1078381101287937570c99e19df46023be2f4520578e258cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe6d286f1e876cf5ca3771a97c9a5a92cba241511736ffcf99dd1c75cf1289c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.061289 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03bfa87a-54ff-4b62-93fc-cd9081c177e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:52:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0105 21:52:01.108270 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:52:01.109156 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3161291694/tls.crt::/tmp/serving-cert-3161291694/tls.key\\\\\\\"\\\\nI0105 21:52:06.595978 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:52:06.599525 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:52:06.599572 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:52:06.599595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:52:06.599600 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:52:06.604099 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0105 21:52:06.604114 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0105 21:52:06.604127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:52:06.604139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:52:06.604142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:52:06.604146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:52:06.604150 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0105 21:52:06.605599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.073546 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a80c44-5162-426c-acb2-7a3feff8198a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:51:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f317279d97b553aa95cbeb8aa579d726adaf25b4e25cb20cfbafbe6a6cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e950449a548269da332c22e0d4110ffbda90a70091978d4f5d13a66431c84a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34b2425b163469ee94fa3f6d53bb28b3d2cc75f8920ea98ef12ecd3f2be774d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:51:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.085056 5034 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:52:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518acc30a7d8a2cd6ee1c510564773d1d61ee1a8381b686f120146229cd7d93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbfb7ce9382ddf4827074eec22cf75e47ebb72b9e0f1b948e2acb42bb9f0316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:08Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.088547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.088586 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.088596 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.088610 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.088620 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.190935 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.190978 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.190991 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.191005 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.191034 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.293061 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.293141 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.293154 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.293172 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.293214 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.395050 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.395111 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.395126 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.395143 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.395154 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.497677 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.497702 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.497710 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.497722 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.497730 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.600063 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.600127 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.600138 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.600152 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.600163 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.702935 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.702979 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.702990 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.703009 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.703021 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.805350 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.805385 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.805397 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.805412 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.805424 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.908003 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.908043 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.908055 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.908072 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:08 crc kubenswrapper[5034]: I0105 21:53:08.908098 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:08Z","lastTransitionTime":"2026-01-05T21:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.010627 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.010662 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.010673 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.010688 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.010699 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.112940 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.112970 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.112977 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.112990 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.112998 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.214967 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.215014 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.215024 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.215036 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.215043 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.317905 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.317932 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.317941 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.317952 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.317961 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.420003 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.420031 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.420041 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.420054 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.420062 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.522441 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.522471 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.522479 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.522671 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.522681 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.624995 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.625033 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.625044 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.625059 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.625071 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.659632 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:09 crc kubenswrapper[5034]: E0105 21:53:09.659811 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.660063 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.660241 5034 util.go:30] "No sandbox for pod can be found. 
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.660398 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:09 crc kubenswrapper[5034]: E0105 21:53:09.660274 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:09 crc kubenswrapper[5034]: E0105 21:53:09.660525 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:09 crc kubenswrapper[5034]: E0105 21:53:09.661059 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.727743 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.727808 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.727823 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.727850 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.727865 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.830861 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.830939 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.830959 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.830985 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.831006 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.933943 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.934024 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.934043 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.934131 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:09 crc kubenswrapper[5034]: I0105 21:53:09.934175 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:09Z","lastTransitionTime":"2026-01-05T21:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.036953 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.036999 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.037008 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.037024 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.037033 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.139729 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.139777 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.139786 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.139798 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.139807 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.243273 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.243342 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.243356 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.243372 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.243384 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.346590 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.346636 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.346645 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.346662 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.346674 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.449876 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.449963 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.449985 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.450039 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.450059 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.552844 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.552900 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.552913 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.552932 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.552948 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.655712 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.655771 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.655798 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.655819 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.655836 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.684768 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.684967 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.684940194 +0000 UTC m=+147.056939673 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.758173 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.758208 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.758220 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.758234 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.758245 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.785866 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.785919 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.785948 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.785972 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786043 5034 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786099 5034 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786138 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786149 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.786129641 +0000 UTC m=+147.158129100 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786165 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786169 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.786159772 +0000 UTC m=+147.158159221 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786177 5034 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786246 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.786234394 +0000 UTC m=+147.158233823 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786278 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786327 5034 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786349 5034 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.786444 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.786418659 +0000 UTC m=+147.158418138 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.838917 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"
Jan 05 21:53:10 crc kubenswrapper[5034]: E0105 21:53:10.839102 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.860041 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.860113 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.860128 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.860145 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.860157 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.961946 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.962007 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.962016 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.962031 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:10 crc kubenswrapper[5034]: I0105 21:53:10.962039 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:10Z","lastTransitionTime":"2026-01-05T21:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.064230 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.064263 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.064274 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.064289 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.064301 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.167256 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.167329 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.167352 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.167374 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.167391 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.269733 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.269767 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.269776 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.269791 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.269802 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.372402 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.372454 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.372465 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.372484 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.372500 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.475505 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.475569 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.475587 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.475609 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.475627 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.578733 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.578766 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.578775 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.578788 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.578797 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.680911 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.680951 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.680963 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.680977 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.680988 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.783632 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.783672 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.783687 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.783708 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.783723 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.837599 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.837632 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.837704 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:11 crc kubenswrapper[5034]: E0105 21:53:11.838049 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.838264 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:11 crc kubenswrapper[5034]: E0105 21:53:11.838406 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:11 crc kubenswrapper[5034]: E0105 21:53:11.838468 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:11 crc kubenswrapper[5034]: E0105 21:53:11.838571 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.885654 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.885686 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.885695 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.885708 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.885720 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.989174 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.989256 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.989276 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.989310 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:11 crc kubenswrapper[5034]: I0105 21:53:11.989333 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:11Z","lastTransitionTime":"2026-01-05T21:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.092620 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.092691 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.092724 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.092750 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.092769 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.099649 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.099700 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.099723 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.099751 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.099771 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:12 crc kubenswrapper[5034]: E0105 21:53:12.114745 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:12Z is after 2025-08-24T17:21:41Z"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.119389 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.119486 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.119510 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.119545 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.119567 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: E0105 21:53:12.132622 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.136893 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.136940 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
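Each of the failed status-patch attempts above dies the same way: the kubelet's PATCH of the node object is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-05. A minimal sketch, not part of the log, for confirming that diagnosis from the node itself; the host and port are taken from the failing Post URL, and the use of the third-party "cryptography" package is an assumption:

# Sketch: fetch the webhook's serving certificate and compare its
# validity window against the local clock, mirroring the x509
# "certificate has expired" failure in the log above.
import ssl
from datetime import datetime, timezone

from cryptography import x509  # assumed available; not in the stdlib

HOST, PORT = "127.0.0.1", 9743  # endpoint from the failing Post URL

# get_server_certificate() does not validate the chain, so it can still
# retrieve a certificate that is already expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before_utc)  # cryptography >= 42
print("not after: ", cert.not_valid_after_utc)
print("expired:   ", now > cert.not_valid_after_utc)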
event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.136959 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.136985 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.137003 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: E0105 21:53:12.151329 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.156749 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.156831 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.156850 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.156878 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.156898 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: E0105 21:53:12.176329 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.180622 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.180659 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.180669 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.180888 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.180899 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: E0105 21:53:12.195749 5034 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:53:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c744546f-b651-4674-9e81-ae7afa931a00\\\",\\\"systemUUID\\\":\\\"098a0fe6-e384-4f14-835d-619afd5e29b6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:53:12Z is after 2025-08-24T17:21:41Z" Jan 05 21:53:12 crc kubenswrapper[5034]: E0105 21:53:12.195915 5034 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.197534 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
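The burst above is the kubelet's bounded retry loop: five consecutive "Error updating node status, will retry" entries within the same second (21:53:12.132622 through .195749, plus the attempt whose payload opens this excerpt), then "update node status exceeds retry count" once the budget is spent; upstream kubelet caps this loop with the nodeStatusUpdateRetry constant, which is 5. A small sketch for tallying such bursts from a saved journal excerpt; the input file name is hypothetical:

# Sketch: count the kubelet's failed node-status patch attempts per
# wall-clock second and how often the retry budget was exhausted.
import re
from collections import Counter

retry_re = re.compile(
    r'E\d{4} (\d{2}:\d{2}:\d{2})\.\d+ .*"Error updating node status, will retry"'
)
giveup_re = re.compile(
    r'"Unable to update node status" err="update node status exceeds retry count"'
)

retries, giveups = Counter(), 0
with open("kubelet.journal.txt") as f:  # hypothetical excerpt of this log
    for line in f:
        m = retry_re.search(line)
        if m:
            retries[m.group(1)] += 1  # bucket retries by second
        elif giveup_re.search(line):
            giveups += 1

for second, n in sorted(retries.items()):
    print(f"{second}: {n} failed status patch attempt(s)")
print(f"retry budget exhausted {giveups} time(s)")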
event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.197569 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.197580 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.197598 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.197610 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.299473 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.299515 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.299526 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.299540 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.299551 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.401610 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.401644 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.401653 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.401665 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.401674 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.504260 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.504311 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.504325 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.504339 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.504351 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.606121 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.606161 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.606173 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.606188 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.606204 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.708323 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.708377 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.708390 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.708410 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.708423 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.811281 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.811321 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.811332 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.811347 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.811358 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.913960 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.913995 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.914007 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.914022 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:12 crc kubenswrapper[5034]: I0105 21:53:12.914033 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:12Z","lastTransitionTime":"2026-01-05T21:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.016547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.016572 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.016581 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.016592 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.016601 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:13Z","lastTransitionTime":"2026-01-05T21:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.118834 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.118884 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.118901 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.118923 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.118938 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:13Z","lastTransitionTime":"2026-01-05T21:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.220842 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.220872 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.220882 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.220894 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.220903 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:13Z","lastTransitionTime":"2026-01-05T21:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.322885 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.322925 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.322933 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.322949 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.322959 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:13Z","lastTransitionTime":"2026-01-05T21:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.425557 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.425593 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.425603 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.425618 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.425628 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:13Z","lastTransitionTime":"2026-01-05T21:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.528074 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.528214 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.528231 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.528339 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.528356 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:13Z","lastTransitionTime":"2026-01-05T21:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.631323 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.631604 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.631691 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.631783 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.631861 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:13Z","lastTransitionTime":"2026-01-05T21:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
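The five entries above are the kubelet's node-status heartbeat: the resource checks all pass, but the Ready condition stays pinned to False because no network plugin has written a CNI config yet. A minimal sketch of that readiness test follows, assuming only what the message itself states (readiness requires at least one CNI configuration file under /etc/kubernetes/cni/net.d/); the real check lives in the CRI-O/ocicni layer and also validates file contents, so this is an illustration, not the actual implementation:

    // cnicheck.go - a minimal sketch of the readiness test behind
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    // Assumption: readiness is just "at least one CNI config file
    // exists"; the real runtime logic also parses and validates it.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    const confDir = "/etc/kubernetes/cni/net.d"

    // networkReady reports whether confDir contains at least one
    // candidate CNI configuration file.
    func networkReady() (bool, error) {
    	entries, err := os.ReadDir(confDir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		if e.IsDir() {
    			continue
    		}
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ready, err := networkReady()
    	if err != nil || !ready {
    		// Mirrors the condition the kubelet keeps reporting above.
    		fmt.Printf("NetworkReady=false reason:NetworkPluginNotReady (err=%v)\n", err)
    		return
    	}
    	fmt.Println("NetworkReady=true")
    }

Until that directory gains a config file, every node-status sync re-records the same condition, which is why the cycle recurs at roughly the kubelet's 100 ms status-update cadence.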
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837344 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837439 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:13 crc kubenswrapper[5034]: E0105 21:53:13.837459 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837506 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837555 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 21:53:13 crc kubenswrapper[5034]: E0105 21:53:13.837566 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837580 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837620 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837630 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:13Z","lastTransitionTime":"2026-01-05T21:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837679 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:13 crc kubenswrapper[5034]: I0105 21:53:13.837712 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:13 crc kubenswrapper[5034]: E0105 21:53:13.837727 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:13 crc kubenswrapper[5034]: E0105 21:53:13.837905 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... heartbeat repeats every ~100 ms from 21:53:13.940 through 21:53:15.786 ...]
Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.837537 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.837573 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.837582 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:15 crc kubenswrapper[5034]: E0105 21:53:15.837667 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.837536 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:15 crc kubenswrapper[5034]: E0105 21:53:15.837746 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:15 crc kubenswrapper[5034]: E0105 21:53:15.837953 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:15 crc kubenswrapper[5034]: E0105 21:53:15.838108 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.889211 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.889247 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.889255 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.889270 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.889289 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:15Z","lastTransitionTime":"2026-01-05T21:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.991622 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.991661 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.991671 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.991687 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:15 crc kubenswrapper[5034]: I0105 21:53:15.991696 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:15Z","lastTransitionTime":"2026-01-05T21:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.093918 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.093962 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.093972 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.093988 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.093999 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.196108 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.196136 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.196145 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.196159 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.196170 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.298104 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.298132 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.298142 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.298155 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.298231 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.400294 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.400341 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.400350 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.400364 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.400374 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.502154 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.502188 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.502198 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.502213 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.502225 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.604267 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.604298 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.604306 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.604320 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.604329 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.705829 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.705881 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.705897 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.705918 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.705934 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.808938 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.808997 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.809012 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.809032 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.809048 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.911633 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.911682 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.911694 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.911711 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:16 crc kubenswrapper[5034]: I0105 21:53:16.911722 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:16Z","lastTransitionTime":"2026-01-05T21:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.014529 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.014563 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.014571 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.014586 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.014595 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.117230 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.117268 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.117278 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.117292 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.117339 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.219981 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.220025 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.220036 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.220052 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.220068 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.323031 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.323072 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.323108 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.323124 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.323136 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.425757 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.425797 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.425809 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.425826 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.425836 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.527989 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.528043 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.528061 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.528108 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.528125 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.630784 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.630830 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.630849 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.630868 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.630881 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.733577 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.733619 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.733644 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.733663 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.733678 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.835925 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.835963 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.835972 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.835986 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.835995 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:17Z","lastTransitionTime":"2026-01-05T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.838391 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.838497 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.838526 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.838529 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:17 crc kubenswrapper[5034]: E0105 21:53:17.838641 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:17 crc kubenswrapper[5034]: E0105 21:53:17.838737 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:17 crc kubenswrapper[5034]: E0105 21:53:17.838805 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:17 crc kubenswrapper[5034]: E0105 21:53:17.838891 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.848308 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.882198 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podStartSLOduration=71.882180685 podStartE2EDuration="1m11.882180685s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:17.871031905 +0000 UTC m=+90.243031344" watchObservedRunningTime="2026-01-05 21:53:17.882180685 +0000 UTC m=+90.254180124"
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.882405 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tsch6" podStartSLOduration=71.882401871 podStartE2EDuration="1m11.882401871s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:17.882351499 +0000 UTC m=+90.254350938" watchObservedRunningTime="2026-01-05 21:53:17.882401871 +0000 UTC m=+90.254401310"
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.915007 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lf4h2" podStartSLOduration=71.914994055 podStartE2EDuration="1m11.914994055s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:17.914642165 +0000 UTC m=+90.286641604" watchObservedRunningTime="2026-01-05 21:53:17.914994055 +0000 UTC m=+90.286993494"
[The node-status cycle repeats at 21:53:17.939.]
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.964590 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-95tx4" podStartSLOduration=71.96457404 podStartE2EDuration="1m11.96457404s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:17.963364437 +0000 UTC m=+90.335363886" watchObservedRunningTime="2026-01-05 21:53:17.96457404 +0000 UTC m=+90.336573489"
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.978052 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l4hpw" podStartSLOduration=70.978035644 podStartE2EDuration="1m10.978035644s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:17.977419267 +0000 UTC m=+90.349418716" watchObservedRunningTime="2026-01-05 21:53:17.978035644 +0000 UTC m=+90.350035083"
Jan 05 21:53:17 crc kubenswrapper[5034]: I0105 21:53:17.992194 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.992178536 podStartE2EDuration="39.992178536s" podCreationTimestamp="2026-01-05 21:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:17.991669542 +0000 UTC m=+90.363668981" watchObservedRunningTime="2026-01-05 21:53:17.992178536 +0000 UTC m=+90.364177975"
Jan 05 21:53:18 crc kubenswrapper[5034]: I0105 21:53:18.024891 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hzbjx" podStartSLOduration=72.024871553 podStartE2EDuration="1m12.024871553s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:18.024510233 +0000 UTC m=+90.396509662" watchObservedRunningTime="2026-01-05 21:53:18.024871553 +0000 UTC m=+90.396870992"
[The node-status cycle repeats at 21:53:18.041.]
Jan 05 21:53:18 crc kubenswrapper[5034]: I0105 21:53:18.050058 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=68.050038091 podStartE2EDuration="1m8.050038091s" podCreationTimestamp="2026-01-05 21:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:18.049386923 +0000 UTC m=+90.421386372" watchObservedRunningTime="2026-01-05 21:53:18.050038091 +0000 UTC m=+90.422037530"
Jan 05 21:53:18 crc kubenswrapper[5034]: I0105 21:53:18.087720 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.087698116 podStartE2EDuration="1m9.087698116s" podCreationTimestamp="2026-01-05 21:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:18.08674607 +0000 UTC m=+90.458745509" watchObservedRunningTime="2026-01-05 21:53:18.087698116 +0000 UTC m=+90.459697555"
Jan 05 21:53:18 crc kubenswrapper[5034]: I0105 21:53:18.087957 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.087946003 podStartE2EDuration="1m12.087946003s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:18.068624607 +0000 UTC m=+90.440624066" watchObservedRunningTime="2026-01-05 21:53:18.087946003 +0000 UTC m=+90.459945452"
[The node-status cycle repeats at 21:53:18.143.]
[The node-status cycle continues every ~100 ms from 21:53:18.246 through 21:53:19.784; only the timestamps differ.]
Jan 05 21:53:19 crc kubenswrapper[5034]: I0105 21:53:19.838136 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:19 crc kubenswrapper[5034]: I0105 21:53:19.838199 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:19 crc kubenswrapper[5034]: I0105 21:53:19.838204 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:19 crc kubenswrapper[5034]: I0105 21:53:19.838226 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:19 crc kubenswrapper[5034]: E0105 21:53:19.838337 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:19 crc kubenswrapper[5034]: E0105 21:53:19.838480 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:19 crc kubenswrapper[5034]: E0105 21:53:19.838587 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:19 crc kubenswrapper[5034]: E0105 21:53:19.838650 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.091748 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.091785 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.091794 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.091806 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.091814 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.194818 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.194861 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.194872 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.194900 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.194914 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.296547 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.296594 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.296606 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.296623 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.296650 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.398927 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.398964 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.398973 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.398986 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.398999 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.502039 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.502119 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.502137 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.502160 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.502180 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.604706 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.604764 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.604782 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.604816 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.604868 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.707037 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.707103 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.707113 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.707126 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.707136 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.809179 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.809210 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.809220 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.809233 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.809242 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.911379 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.911414 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.911425 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.911443 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:20 crc kubenswrapper[5034]: I0105 21:53:20.911454 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:20Z","lastTransitionTime":"2026-01-05T21:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.014016 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.014060 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.014071 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.014101 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.014112 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.116135 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.116181 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.116191 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.116205 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.116216 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.218522 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.218777 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.218845 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.218920 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.218994 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.321282 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.321336 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.321351 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.321370 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.321381 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.423477 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.423916 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.424209 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.424428 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.424605 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.526676 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.526717 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.526726 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.526742 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.526750 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.628770 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.628841 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.628868 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.628896 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.628910 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.731053 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.731421 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.731659 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.731891 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.732113 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.835162 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.835197 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.835206 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.835220 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.835230 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.837352 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.837395 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.837438 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:21 crc kubenswrapper[5034]: E0105 21:53:21.837439 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.837396 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:21 crc kubenswrapper[5034]: E0105 21:53:21.837497 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:21 crc kubenswrapper[5034]: E0105 21:53:21.837650 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:21 crc kubenswrapper[5034]: E0105 21:53:21.837736 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.937260 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.937324 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.937341 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.937364 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:21 crc kubenswrapper[5034]: I0105 21:53:21.937381 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:21Z","lastTransitionTime":"2026-01-05T21:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.039411 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.039449 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.039459 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.039473 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.039484 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:22Z","lastTransitionTime":"2026-01-05T21:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.141531 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.141606 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.141646 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.141677 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.141697 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:22Z","lastTransitionTime":"2026-01-05T21:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.243767 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.243806 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.243819 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.243834 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.243846 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:22Z","lastTransitionTime":"2026-01-05T21:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.346120 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.346163 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.346182 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.346200 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.346210 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:22Z","lastTransitionTime":"2026-01-05T21:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.388432 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.388473 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.388483 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.388498 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.388509 5034 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:53:22Z","lastTransitionTime":"2026-01-05T21:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.424060 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm"] Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.424414 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.426237 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.426345 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.426521 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.428500 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.455417 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.455396886 podStartE2EDuration="5.455396886s" podCreationTimestamp="2026-01-05 21:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:22.454922453 +0000 UTC m=+94.826921882" watchObservedRunningTime="2026-01-05 21:53:22.455396886 +0000 UTC m=+94.827396335" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.503402 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.503478 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.503549 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.503590 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.503650 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.604469 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.604525 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.604569 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.604605 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.604634 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.604718 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.604610 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.605745 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.616994 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.619388 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n2kkm\" (UID: \"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: I0105 21:53:22.737538 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" Jan 05 21:53:22 crc kubenswrapper[5034]: W0105 21:53:22.750955 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4031f8dc_4fb6_4e0f_aadf_cef1a4a8d282.slice/crio-9e4e1cdc3cf5e4655d2b5f5d6b7c3f953be424d7ed588fd5a7a526b17a6a6608 WatchSource:0}: Error finding container 9e4e1cdc3cf5e4655d2b5f5d6b7c3f953be424d7ed588fd5a7a526b17a6a6608: Status 404 returned error can't find the container with id 9e4e1cdc3cf5e4655d2b5f5d6b7c3f953be424d7ed588fd5a7a526b17a6a6608 Jan 05 21:53:23 crc kubenswrapper[5034]: I0105 21:53:23.699270 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" event={"ID":"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282","Type":"ContainerStarted","Data":"b8dc86f1cd72b6b7887ad3a5957b49e4861531fce7b4c82fb9184e8172eee0a0"} Jan 05 21:53:23 crc kubenswrapper[5034]: I0105 21:53:23.699330 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" event={"ID":"4031f8dc-4fb6-4e0f-aadf-cef1a4a8d282","Type":"ContainerStarted","Data":"9e4e1cdc3cf5e4655d2b5f5d6b7c3f953be424d7ed588fd5a7a526b17a6a6608"} Jan 05 21:53:23 crc kubenswrapper[5034]: I0105 21:53:23.711137 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2kkm" podStartSLOduration=77.711119124 podStartE2EDuration="1m17.711119124s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:23.710957419 +0000 UTC m=+96.082956848" watchObservedRunningTime="2026-01-05 21:53:23.711119124 +0000 UTC m=+96.083118563" Jan 05 21:53:23 crc kubenswrapper[5034]: I0105 21:53:23.838090 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:23 crc kubenswrapper[5034]: I0105 21:53:23.838279 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:23 crc kubenswrapper[5034]: E0105 21:53:23.838345 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:23 crc kubenswrapper[5034]: I0105 21:53:23.838352 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:23 crc kubenswrapper[5034]: I0105 21:53:23.838525 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:23 crc kubenswrapper[5034]: E0105 21:53:23.838718 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:23 crc kubenswrapper[5034]: E0105 21:53:23.838777 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:23 crc kubenswrapper[5034]: E0105 21:53:23.838845 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:23 crc kubenswrapper[5034]: I0105 21:53:23.839061 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 21:53:23 crc kubenswrapper[5034]: E0105 21:53:23.839251 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" Jan 05 21:53:25 crc kubenswrapper[5034]: I0105 21:53:25.330388 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:25 crc kubenswrapper[5034]: E0105 21:53:25.330580 5034 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:53:25 crc kubenswrapper[5034]: E0105 21:53:25.330673 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs podName:7949c792-bd35-4fb3-9235-402a13c61026 nodeName:}" failed. No retries permitted until 2026-01-05 21:54:29.330651655 +0000 UTC m=+161.702651164 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs") pod "network-metrics-daemon-99zr4" (UID: "7949c792-bd35-4fb3-9235-402a13c61026") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:53:25 crc kubenswrapper[5034]: I0105 21:53:25.837803 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:25 crc kubenswrapper[5034]: I0105 21:53:25.837854 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:25 crc kubenswrapper[5034]: I0105 21:53:25.837894 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:25 crc kubenswrapper[5034]: E0105 21:53:25.837927 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:25 crc kubenswrapper[5034]: I0105 21:53:25.837945 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:25 crc kubenswrapper[5034]: E0105 21:53:25.838013 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:25 crc kubenswrapper[5034]: E0105 21:53:25.838092 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:25 crc kubenswrapper[5034]: E0105 21:53:25.838141 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:27 crc kubenswrapper[5034]: I0105 21:53:27.838341 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:27 crc kubenswrapper[5034]: I0105 21:53:27.838366 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:27 crc kubenswrapper[5034]: I0105 21:53:27.838383 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:27 crc kubenswrapper[5034]: I0105 21:53:27.839436 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:27 crc kubenswrapper[5034]: E0105 21:53:27.839475 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:27 crc kubenswrapper[5034]: E0105 21:53:27.839552 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:27 crc kubenswrapper[5034]: E0105 21:53:27.839862 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:27 crc kubenswrapper[5034]: E0105 21:53:27.840908 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:29 crc kubenswrapper[5034]: I0105 21:53:29.837427 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:29 crc kubenswrapper[5034]: I0105 21:53:29.837451 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:29 crc kubenswrapper[5034]: E0105 21:53:29.837584 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:29 crc kubenswrapper[5034]: I0105 21:53:29.837611 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:29 crc kubenswrapper[5034]: E0105 21:53:29.837892 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:29 crc kubenswrapper[5034]: E0105 21:53:29.837985 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:29 crc kubenswrapper[5034]: I0105 21:53:29.838253 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:29 crc kubenswrapper[5034]: E0105 21:53:29.838360 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:31 crc kubenswrapper[5034]: I0105 21:53:31.838255 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:31 crc kubenswrapper[5034]: I0105 21:53:31.838289 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:31 crc kubenswrapper[5034]: I0105 21:53:31.838355 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:31 crc kubenswrapper[5034]: I0105 21:53:31.838265 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:31 crc kubenswrapper[5034]: E0105 21:53:31.838422 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:31 crc kubenswrapper[5034]: E0105 21:53:31.838491 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:31 crc kubenswrapper[5034]: E0105 21:53:31.838643 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:31 crc kubenswrapper[5034]: E0105 21:53:31.838739 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:33 crc kubenswrapper[5034]: I0105 21:53:33.837914 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:33 crc kubenswrapper[5034]: I0105 21:53:33.837959 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:33 crc kubenswrapper[5034]: I0105 21:53:33.837978 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:33 crc kubenswrapper[5034]: I0105 21:53:33.837914 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:33 crc kubenswrapper[5034]: E0105 21:53:33.838045 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:33 crc kubenswrapper[5034]: E0105 21:53:33.838108 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:33 crc kubenswrapper[5034]: E0105 21:53:33.838157 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:33 crc kubenswrapper[5034]: E0105 21:53:33.838225 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:35 crc kubenswrapper[5034]: I0105 21:53:35.838317 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:35 crc kubenswrapper[5034]: I0105 21:53:35.838332 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:35 crc kubenswrapper[5034]: I0105 21:53:35.838328 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:35 crc kubenswrapper[5034]: I0105 21:53:35.838935 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:35 crc kubenswrapper[5034]: E0105 21:53:35.839050 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:35 crc kubenswrapper[5034]: E0105 21:53:35.839122 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:35 crc kubenswrapper[5034]: E0105 21:53:35.839200 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:35 crc kubenswrapper[5034]: E0105 21:53:35.839358 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:35 crc kubenswrapper[5034]: I0105 21:53:35.839540 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 21:53:35 crc kubenswrapper[5034]: E0105 21:53:35.839842 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fmfz_openshift-ovn-kubernetes(788e0f44-29c3-4c4a-afe9-33c26a965d74)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" Jan 05 21:53:37 crc kubenswrapper[5034]: I0105 21:53:37.838127 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:37 crc kubenswrapper[5034]: I0105 21:53:37.839001 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:37 crc kubenswrapper[5034]: I0105 21:53:37.839035 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:37 crc kubenswrapper[5034]: I0105 21:53:37.839044 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:37 crc kubenswrapper[5034]: E0105 21:53:37.839130 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:37 crc kubenswrapper[5034]: E0105 21:53:37.839165 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:37 crc kubenswrapper[5034]: E0105 21:53:37.839091 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:37 crc kubenswrapper[5034]: E0105 21:53:37.839208 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:39 crc kubenswrapper[5034]: I0105 21:53:39.837787 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:39 crc kubenswrapper[5034]: I0105 21:53:39.837787 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:39 crc kubenswrapper[5034]: I0105 21:53:39.837899 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:39 crc kubenswrapper[5034]: I0105 21:53:39.837930 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:39 crc kubenswrapper[5034]: E0105 21:53:39.838049 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:39 crc kubenswrapper[5034]: E0105 21:53:39.838176 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:39 crc kubenswrapper[5034]: E0105 21:53:39.838242 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:39 crc kubenswrapper[5034]: E0105 21:53:39.838297 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:40 crc kubenswrapper[5034]: I0105 21:53:40.744680 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/1.log" Jan 05 21:53:40 crc kubenswrapper[5034]: I0105 21:53:40.745159 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/0.log" Jan 05 21:53:40 crc kubenswrapper[5034]: I0105 21:53:40.745203 5034 generic.go:334] "Generic (PLEG): container finished" podID="691cc76e-ed89-4547-9bb1-58b03c8f7932" containerID="5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a" exitCode=1 Jan 05 21:53:40 crc kubenswrapper[5034]: I0105 21:53:40.745235 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tsch6" event={"ID":"691cc76e-ed89-4547-9bb1-58b03c8f7932","Type":"ContainerDied","Data":"5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a"} Jan 05 21:53:40 crc kubenswrapper[5034]: I0105 21:53:40.745271 5034 scope.go:117] "RemoveContainer" containerID="23d0424d3bbe1328846ada0db04db1d9698bfda669209ec914e97e6f2e62b740" Jan 05 21:53:40 crc kubenswrapper[5034]: I0105 21:53:40.745632 5034 scope.go:117] "RemoveContainer" containerID="5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a" Jan 05 21:53:40 crc kubenswrapper[5034]: E0105 21:53:40.745801 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tsch6_openshift-multus(691cc76e-ed89-4547-9bb1-58b03c8f7932)\"" pod="openshift-multus/multus-tsch6" podUID="691cc76e-ed89-4547-9bb1-58b03c8f7932" Jan 05 21:53:41 crc kubenswrapper[5034]: I0105 21:53:41.749619 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/1.log" Jan 05 21:53:41 crc kubenswrapper[5034]: I0105 
Jan 05 21:53:41 crc kubenswrapper[5034]: E0105 21:53:41.838614 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:41 crc kubenswrapper[5034]: I0105 21:53:41.838377 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:41 crc kubenswrapper[5034]: E0105 21:53:41.838852 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:41 crc kubenswrapper[5034]: I0105 21:53:41.838343 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:41 crc kubenswrapper[5034]: E0105 21:53:41.839097 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:41 crc kubenswrapper[5034]: I0105 21:53:41.838383 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:41 crc kubenswrapper[5034]: E0105 21:53:41.839332 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:43 crc kubenswrapper[5034]: I0105 21:53:43.837411 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:43 crc kubenswrapper[5034]: I0105 21:53:43.837625 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:43 crc kubenswrapper[5034]: I0105 21:53:43.837473 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:43 crc kubenswrapper[5034]: E0105 21:53:43.837702 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:43 crc kubenswrapper[5034]: I0105 21:53:43.837416 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:43 crc kubenswrapper[5034]: E0105 21:53:43.837881 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:43 crc kubenswrapper[5034]: E0105 21:53:43.837910 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:43 crc kubenswrapper[5034]: E0105 21:53:43.837974 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:45 crc kubenswrapper[5034]: I0105 21:53:45.837468 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:45 crc kubenswrapper[5034]: I0105 21:53:45.837502 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:45 crc kubenswrapper[5034]: I0105 21:53:45.837586 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:45 crc kubenswrapper[5034]: E0105 21:53:45.837692 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 21:53:45 crc kubenswrapper[5034]: I0105 21:53:45.837705 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:45 crc kubenswrapper[5034]: E0105 21:53:45.837799 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 21:53:45 crc kubenswrapper[5034]: E0105 21:53:45.837924 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026"
Jan 05 21:53:45 crc kubenswrapper[5034]: E0105 21:53:45.838001 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:47 crc kubenswrapper[5034]: I0105 21:53:47.837342 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 21:53:47 crc kubenswrapper[5034]: I0105 21:53:47.837428 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4"
Jan 05 21:53:47 crc kubenswrapper[5034]: E0105 21:53:47.839591 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 21:53:47 crc kubenswrapper[5034]: I0105 21:53:47.839861 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 21:53:47 crc kubenswrapper[5034]: I0105 21:53:47.839914 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 21:53:47 crc kubenswrapper[5034]: E0105 21:53:47.839998 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:47 crc kubenswrapper[5034]: E0105 21:53:47.840263 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:47 crc kubenswrapper[5034]: E0105 21:53:47.840526 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:47 crc kubenswrapper[5034]: E0105 21:53:47.880697 5034 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 05 21:53:47 crc kubenswrapper[5034]: E0105 21:53:47.975942 5034 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 21:53:48 crc kubenswrapper[5034]: I0105 21:53:48.838513 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.778623 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/3.log" Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.781068 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerStarted","Data":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.781878 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.807116 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podStartSLOduration=103.807099524 podStartE2EDuration="1m43.807099524s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:49.806902398 +0000 UTC m=+122.178901837" watchObservedRunningTime="2026-01-05 21:53:49.807099524 +0000 UTC m=+122.179098963" Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.837955 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:49 crc kubenswrapper[5034]: E0105 21:53:49.838359 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.838003 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:49 crc kubenswrapper[5034]: E0105 21:53:49.838489 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.838028 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:49 crc kubenswrapper[5034]: E0105 21:53:49.838572 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.837982 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:49 crc kubenswrapper[5034]: E0105 21:53:49.838651 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:49 crc kubenswrapper[5034]: I0105 21:53:49.920991 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-99zr4"] Jan 05 21:53:50 crc kubenswrapper[5034]: I0105 21:53:50.783693 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:50 crc kubenswrapper[5034]: E0105 21:53:50.783806 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:51 crc kubenswrapper[5034]: I0105 21:53:51.837714 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:51 crc kubenswrapper[5034]: I0105 21:53:51.837786 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:51 crc kubenswrapper[5034]: E0105 21:53:51.837858 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:51 crc kubenswrapper[5034]: I0105 21:53:51.837805 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:51 crc kubenswrapper[5034]: E0105 21:53:51.837956 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:51 crc kubenswrapper[5034]: E0105 21:53:51.838019 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:52 crc kubenswrapper[5034]: I0105 21:53:52.838000 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:52 crc kubenswrapper[5034]: E0105 21:53:52.838185 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:52 crc kubenswrapper[5034]: I0105 21:53:52.838508 5034 scope.go:117] "RemoveContainer" containerID="5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a" Jan 05 21:53:52 crc kubenswrapper[5034]: E0105 21:53:52.977367 5034 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 21:53:53 crc kubenswrapper[5034]: I0105 21:53:53.794525 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/1.log" Jan 05 21:53:53 crc kubenswrapper[5034]: I0105 21:53:53.794575 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tsch6" event={"ID":"691cc76e-ed89-4547-9bb1-58b03c8f7932","Type":"ContainerStarted","Data":"7d8d3280f5d4e9e2ad1d86c2f4531a86cb70ed40c439093604147b08ca3aae00"} Jan 05 21:53:53 crc kubenswrapper[5034]: I0105 21:53:53.837754 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:53 crc kubenswrapper[5034]: I0105 21:53:53.837833 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:53 crc kubenswrapper[5034]: I0105 21:53:53.837784 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:53 crc kubenswrapper[5034]: E0105 21:53:53.837959 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:53 crc kubenswrapper[5034]: E0105 21:53:53.838009 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:53 crc kubenswrapper[5034]: E0105 21:53:53.838110 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:54 crc kubenswrapper[5034]: I0105 21:53:54.838118 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:54 crc kubenswrapper[5034]: E0105 21:53:54.838264 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:55 crc kubenswrapper[5034]: I0105 21:53:55.837484 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:55 crc kubenswrapper[5034]: E0105 21:53:55.837977 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:55 crc kubenswrapper[5034]: I0105 21:53:55.838312 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:55 crc kubenswrapper[5034]: I0105 21:53:55.838358 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:55 crc kubenswrapper[5034]: E0105 21:53:55.838468 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:55 crc kubenswrapper[5034]: E0105 21:53:55.838572 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:56 crc kubenswrapper[5034]: I0105 21:53:56.838759 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:56 crc kubenswrapper[5034]: E0105 21:53:56.838893 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99zr4" podUID="7949c792-bd35-4fb3-9235-402a13c61026" Jan 05 21:53:57 crc kubenswrapper[5034]: I0105 21:53:57.837330 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:57 crc kubenswrapper[5034]: I0105 21:53:57.837383 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:57 crc kubenswrapper[5034]: I0105 21:53:57.837382 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:57 crc kubenswrapper[5034]: E0105 21:53:57.838296 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:53:57 crc kubenswrapper[5034]: E0105 21:53:57.838351 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:53:57 crc kubenswrapper[5034]: E0105 21:53:57.838421 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:53:58 crc kubenswrapper[5034]: I0105 21:53:58.838114 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:53:58 crc kubenswrapper[5034]: I0105 21:53:58.840211 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 05 21:53:58 crc kubenswrapper[5034]: I0105 21:53:58.840892 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 05 21:53:59 crc kubenswrapper[5034]: I0105 21:53:59.838292 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:53:59 crc kubenswrapper[5034]: I0105 21:53:59.838342 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:53:59 crc kubenswrapper[5034]: I0105 21:53:59.838433 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:53:59 crc kubenswrapper[5034]: I0105 21:53:59.841411 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 05 21:53:59 crc kubenswrapper[5034]: I0105 21:53:59.841899 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 05 21:53:59 crc kubenswrapper[5034]: I0105 21:53:59.841930 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 05 21:53:59 crc kubenswrapper[5034]: I0105 21:53:59.843405 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 05 21:54:01 crc kubenswrapper[5034]: I0105 21:54:01.106208 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.528794 5034 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.559636 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-94ljp"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.560348 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.561002 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.561549 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.563319 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.563833 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.564437 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.564682 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.564972 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.565426 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.565496 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7rtg"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.565728 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.565824 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g2l2g"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.566046 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.566095 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vpvt5"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.566317 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.566641 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.566727 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.567234 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.567315 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.568286 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.568601 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdbd6"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.568802 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.568952 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.569378 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.569627 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.569866 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.570059 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.570527 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.572624 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.577799 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.584312 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.591741 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.591796 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.593774 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.594421 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.604251 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.604485 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.606290 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.608004 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.608589 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.609067 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.609419 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.609464 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.609501 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.610542 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.611162 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.611327 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.611989 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.612018 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.612158 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.612327 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.612379 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.612481 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.616467 5034 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.616530 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.616599 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.616467 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.616723 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.616748 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.616782 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618454 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-client-ca\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618494 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6019a2f9-5524-4776-851a-e30c348536d0-node-pullsecrets\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618520 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579527a6-1737-40f2-8cfa-1798cc770142-config\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618556 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618611 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/03513125-c6f9-46c6-a4b6-87a82b869132-machine-approver-tls\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618653 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z84f2\" (UniqueName: \"kubernetes.io/projected/03513125-c6f9-46c6-a4b6-87a82b869132-kube-api-access-z84f2\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618714 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-config\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618742 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618779 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-dir\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618800 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9drfx\" (UniqueName: \"kubernetes.io/projected/ff056103-f552-4ce0-a4d1-83570b0ef42c-kube-api-access-9drfx\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618826 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljxkj\" (UniqueName: \"kubernetes.io/projected/16522025-6bf5-4451-85e6-2df92d8164c2-kube-api-access-ljxkj\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618851 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618873 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-encryption-config\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618893 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb25k\" (UniqueName: \"kubernetes.io/projected/c2510464-62e1-4d58-913e-35f87bed60d7-kube-api-access-cb25k\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618918 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618945 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618969 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/579527a6-1737-40f2-8cfa-1798cc770142-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.618991 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03513125-c6f9-46c6-a4b6-87a82b869132-auth-proxy-config\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619015 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619036 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619057 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619154 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619180 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-config\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619198 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2510464-62e1-4d58-913e-35f87bed60d7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619232 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619254 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtn6j\" (UniqueName: \"kubernetes.io/projected/6019a2f9-5524-4776-851a-e30c348536d0-kube-api-access-jtn6j\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619275 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff056103-f552-4ce0-a4d1-83570b0ef42c-serving-cert\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619293 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16522025-6bf5-4451-85e6-2df92d8164c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619318 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-serving-cert\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " 
pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619341 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-config\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619359 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2510464-62e1-4d58-913e-35f87bed60d7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619382 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03513125-c6f9-46c6-a4b6-87a82b869132-config\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619403 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619422 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd4dl\" (UniqueName: \"kubernetes.io/projected/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-kube-api-access-dd4dl\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619443 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619462 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzp47\" (UniqueName: \"kubernetes.io/projected/3880fa85-26b0-4ed9-9b69-fe57b8c01092-kube-api-access-dzp47\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619484 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619507 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619526 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-image-import-ca\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619547 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-policies\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619579 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3880fa85-26b0-4ed9-9b69-fe57b8c01092-serving-cert\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619601 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-audit\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619623 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-etcd-client\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619643 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/579527a6-1737-40f2-8cfa-1798cc770142-images\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619666 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619691 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6019a2f9-5524-4776-851a-e30c348536d0-audit-dir\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619712 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619742 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25b2\" (UniqueName: \"kubernetes.io/projected/579527a6-1737-40f2-8cfa-1798cc770142-kube-api-access-t25b2\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.619813 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16522025-6bf5-4451-85e6-2df92d8164c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.623698 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.623719 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.623738 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.623769 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.623852 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.623853 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.623899 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.624830 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.624871 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.624996 5034 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.625143 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.627097 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.627182 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.627845 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.628495 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nstll"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.629092 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.640940 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.641470 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.642551 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.647639 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.664126 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.664503 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.664845 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.665167 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.666121 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.668965 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.669208 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.669660 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.670634 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.671298 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.671681 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.671743 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.671890 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.672021 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.672031 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.672473 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.674065 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.677974 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.679457 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.680013 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.681869 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8wssg"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.682450 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7rtg"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.682548 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.683584 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.687665 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.688068 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.688287 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.688422 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.689139 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hz9rx"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.689906 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.690347 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.690547 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.690890 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.691264 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.691370 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.691574 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.692003 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.692735 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.694815 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.692682 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.692716 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.692763 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.695614 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.697101 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-94ljp"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.708285 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.708383 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.708859 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.709387 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f94p5"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.710389 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.710998 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.712114 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.712901 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.713935 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cthth"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.715202 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.715604 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.717625 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cthth" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.718620 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.720039 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.720367 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.721124 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-config\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.724285 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.724334 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9drfx\" (UniqueName: \"kubernetes.io/projected/ff056103-f552-4ce0-a4d1-83570b0ef42c-kube-api-access-9drfx\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.724387 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-dir\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.724421 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljxkj\" (UniqueName: \"kubernetes.io/projected/16522025-6bf5-4451-85e6-2df92d8164c2-kube-api-access-ljxkj\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.724444 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb25k\" (UniqueName: \"kubernetes.io/projected/c2510464-62e1-4d58-913e-35f87bed60d7-kube-api-access-cb25k\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.721151 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.722309 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-config\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.724478 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.725532 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.726848 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-encryption-config\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.726887 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.726929 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03513125-c6f9-46c6-a4b6-87a82b869132-auth-proxy-config\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 
21:54:03.726960 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.726985 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/579527a6-1737-40f2-8cfa-1798cc770142-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727014 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727033 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727056 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727098 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727127 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-config\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727150 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2510464-62e1-4d58-913e-35f87bed60d7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727173 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jtn6j\" (UniqueName: \"kubernetes.io/projected/6019a2f9-5524-4776-851a-e30c348536d0-kube-api-access-jtn6j\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727215 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727237 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff056103-f552-4ce0-a4d1-83570b0ef42c-serving-cert\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727261 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16522025-6bf5-4451-85e6-2df92d8164c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727283 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-serving-cert\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727306 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-config\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727325 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2510464-62e1-4d58-913e-35f87bed60d7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727354 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03513125-c6f9-46c6-a4b6-87a82b869132-config\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727378 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: 
\"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727402 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd4dl\" (UniqueName: \"kubernetes.io/projected/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-kube-api-access-dd4dl\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727428 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727446 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzp47\" (UniqueName: \"kubernetes.io/projected/3880fa85-26b0-4ed9-9b69-fe57b8c01092-kube-api-access-dzp47\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727467 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727490 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-image-import-ca\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727516 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727539 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-policies\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727579 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3880fa85-26b0-4ed9-9b69-fe57b8c01092-serving-cert\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 
21:54:03.727603 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-audit\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727624 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-etcd-client\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727644 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/579527a6-1737-40f2-8cfa-1798cc770142-images\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727667 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727690 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6019a2f9-5524-4776-851a-e30c348536d0-audit-dir\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727709 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727728 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t25b2\" (UniqueName: \"kubernetes.io/projected/579527a6-1737-40f2-8cfa-1798cc770142-kube-api-access-t25b2\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727748 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16522025-6bf5-4451-85e6-2df92d8164c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727778 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-client-ca\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: 
\"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727801 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6019a2f9-5524-4776-851a-e30c348536d0-node-pullsecrets\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727823 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579527a6-1737-40f2-8cfa-1798cc770142-config\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727853 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727874 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/03513125-c6f9-46c6-a4b6-87a82b869132-machine-approver-tls\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.727894 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84f2\" (UniqueName: \"kubernetes.io/projected/03513125-c6f9-46c6-a4b6-87a82b869132-kube-api-access-z84f2\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.737176 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.752565 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-dir\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.752737 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kjmjm"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.753107 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.753608 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.753679 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.753732 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.754133 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.754688 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.754712 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.755743 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.755863 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.756103 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.756217 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.756338 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.756544 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.757166 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.759127 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.759436 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.759589 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-config\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.760061 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.760492 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.762033 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.762344 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lhk82"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.762945 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.763395 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.763575 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.763656 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.764344 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.765281 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.768851 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.768903 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.769062 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff056103-f552-4ce0-a4d1-83570b0ef42c-serving-cert\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.769103 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.770529 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2510464-62e1-4d58-913e-35f87bed60d7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.770865 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03513125-c6f9-46c6-a4b6-87a82b869132-config\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.770989 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2510464-62e1-4d58-913e-35f87bed60d7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.771061 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6019a2f9-5524-4776-851a-e30c348536d0-audit-dir\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.771819 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 
05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.771989 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.772968 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.773374 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16522025-6bf5-4451-85e6-2df92d8164c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.773719 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-image-import-ca\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.774176 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-client-ca\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.774674 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-serving-cert\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.775136 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579527a6-1737-40f2-8cfa-1798cc770142-config\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.775284 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-config\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.775426 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03513125-c6f9-46c6-a4b6-87a82b869132-auth-proxy-config\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.776056 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-policies\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.776845 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6019a2f9-5524-4776-851a-e30c348536d0-audit\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.777900 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.780665 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.779320 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/579527a6-1737-40f2-8cfa-1798cc770142-images\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.779427 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6019a2f9-5524-4776-851a-e30c348536d0-node-pullsecrets\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.779627 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-encryption-config\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.780926 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.778285 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: 
I0105 21:54:03.782548 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff056103-f552-4ce0-a4d1-83570b0ef42c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.783452 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.783962 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.785040 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4fdnm"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.785351 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.785654 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16522025-6bf5-4451-85e6-2df92d8164c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.786120 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.786445 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.786486 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/579527a6-1737-40f2-8cfa-1798cc770142-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.787906 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.788254 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6019a2f9-5524-4776-851a-e30c348536d0-etcd-client\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.788323 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3880fa85-26b0-4ed9-9b69-fe57b8c01092-serving-cert\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.790200 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dd7xb"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.791022 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.791171 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.791831 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.791983 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-btdm7"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.792631 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.793550 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.795624 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdbd6"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.795734 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.798107 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xz2sp"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.798675 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.799013 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.799261 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-xz2sp" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.799575 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.800623 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/03513125-c6f9-46c6-a4b6-87a82b869132-machine-approver-tls\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.800903 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rttcs"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.802622 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.802765 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g2l2g"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.804035 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.804111 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8wssg"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.805945 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.807033 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.808056 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.809167 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nstll"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.810111 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.813687 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.821403 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.823857 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.828678 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f94p5"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.832776 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.835065 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.836175 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.841191 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hz9rx"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.841217 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vpvt5"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.841228 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kjmjm"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.841564 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cthth"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.842235 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.842854 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.844023 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.845150 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.847495 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-47rdk"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.848590 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.848701 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dm8kk"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.849296 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dm8kk" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.849847 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.851195 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dd7xb"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.852294 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.853636 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.854976 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.856639 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4fdnm"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.858004 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-47rdk"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.859473 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-btdm7"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.861150 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.862313 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.862894 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.864266 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.865450 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xz2sp"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.866748 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.868045 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dm8kk"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.869519 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hv5s8"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.870256 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.870895 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hv5s8"] Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.883414 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.902445 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.924712 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.943257 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.962772 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 05 21:54:03 crc kubenswrapper[5034]: I0105 21:54:03.982875 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.009169 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.023184 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.042126 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.062624 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.082524 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.110678 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.123468 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.142987 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.163707 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.182796 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.203626 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.222617 5034 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.243540 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.263121 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.281965 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.302688 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.322879 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.343755 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.377699 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84f2\" (UniqueName: \"kubernetes.io/projected/03513125-c6f9-46c6-a4b6-87a82b869132-kube-api-access-z84f2\") pod \"machine-approver-56656f9798-4xwvv\" (UID: \"03513125-c6f9-46c6-a4b6-87a82b869132\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.395580 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9drfx\" (UniqueName: \"kubernetes.io/projected/ff056103-f552-4ce0-a4d1-83570b0ef42c-kube-api-access-9drfx\") pod \"authentication-operator-69f744f599-x7rtg\" (UID: \"ff056103-f552-4ce0-a4d1-83570b0ef42c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.415846 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljxkj\" (UniqueName: \"kubernetes.io/projected/16522025-6bf5-4451-85e6-2df92d8164c2-kube-api-access-ljxkj\") pod \"openshift-config-operator-7777fb866f-jcmjt\" (UID: \"16522025-6bf5-4451-85e6-2df92d8164c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.437270 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb25k\" (UniqueName: \"kubernetes.io/projected/c2510464-62e1-4d58-913e-35f87bed60d7-kube-api-access-cb25k\") pod \"openshift-apiserver-operator-796bbdcf4f-ghp77\" (UID: \"c2510464-62e1-4d58-913e-35f87bed60d7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.442621 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.463173 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.483353 5034 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.502654 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.505070 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.522760 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.543481 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.548318 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.562934 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.583763 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.585494 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.599497 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.602618 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.622745 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.646875 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.718589 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtn6j\" (UniqueName: \"kubernetes.io/projected/6019a2f9-5524-4776-851a-e30c348536d0-kube-api-access-jtn6j\") pod \"apiserver-76f77b778f-vpvt5\" (UID: \"6019a2f9-5524-4776-851a-e30c348536d0\") " pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.723763 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745556 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745600 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-bound-sa-token\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745620 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4mzz\" (UniqueName: \"kubernetes.io/projected/9ff8a90e-11c2-4cf9-b8db-e5c90a552709-kube-api-access-f4mzz\") pod \"migrator-59844c95c7-2w4bs\" (UID: \"9ff8a90e-11c2-4cf9-b8db-e5c90a552709\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745658 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745676 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745712 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-certificates\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745733 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-serving-cert\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745768 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745786 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 
21:54:04.745805 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055944f7-534d-4273-9960-3659d5751c2f-config\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745843 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-audit-policies\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745873 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745888 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/765481c2-452a-44d8-bcc9-ba2b9e653a8b-audit-dir\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745902 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8lpr\" (UniqueName: \"kubernetes.io/projected/765481c2-452a-44d8-bcc9-ba2b9e653a8b-kube-api-access-h8lpr\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745924 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-trusted-ca\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745939 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/055944f7-534d-4273-9960-3659d5751c2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.745982 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746003 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hm2l\" (UniqueName: \"kubernetes.io/projected/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-kube-api-access-8hm2l\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746035 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055944f7-534d-4273-9960-3659d5751c2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746124 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746149 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvn5\" (UniqueName: \"kubernetes.io/projected/285edf94-ce4e-4226-bf7f-aff67d967a6e-kube-api-access-nrvn5\") pod \"cluster-samples-operator-665b6dd947-l6glc\" (UID: \"285edf94-ce4e-4226-bf7f-aff67d967a6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746164 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-images\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746179 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/285edf94-ce4e-4226-bf7f-aff67d967a6e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l6glc\" (UID: \"285edf94-ce4e-4226-bf7f-aff67d967a6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746193 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qll8h\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-kube-api-access-qll8h\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746242 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-tls\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746257 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-encryption-config\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746287 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-proxy-tls\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746341 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-etcd-client\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.746387 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.747057 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: E0105 21:54:04.747268 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.247249467 +0000 UTC m=+137.619249006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.763016 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.781662 5034 request.go:700] Waited for 1.014843396s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0 Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.783175 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.803436 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.819610 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.823253 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.828057 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" event={"ID":"03513125-c6f9-46c6-a4b6-87a82b869132","Type":"ContainerStarted","Data":"e460162993e79eb03388460c5551e2e3409291b7e3b23d034b88207a3ead5d0b"} Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.843850 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847196 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:04 crc kubenswrapper[5034]: E0105 21:54:04.847344 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.347328817 +0000 UTC m=+137.719328246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847370 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/145eff0c-dcb1-47ac-be8b-10d06f7dd204-tmpfs\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847399 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9afea6f6-4124-424d-a820-33b15ec35121-metrics-tls\") pod \"dns-operator-744455d44c-cthth\" (UID: \"9afea6f6-4124-424d-a820-33b15ec35121\") " pod="openshift-dns-operator/dns-operator-744455d44c-cthth" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847420 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847436 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-plugins-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847451 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21e89bb2-84f3-407b-966b-b1774d96da98-secret-volume\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847472 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6xh\" (UniqueName: \"kubernetes.io/projected/9afea6f6-4124-424d-a820-33b15ec35121-kube-api-access-lj6xh\") pod \"dns-operator-744455d44c-cthth\" (UID: \"9afea6f6-4124-424d-a820-33b15ec35121\") " pod="openshift-dns-operator/dns-operator-744455d44c-cthth" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847503 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-certificates\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847520 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-default-certificate\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847535 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c59995-27b2-4192-826d-f22e742dae38-serving-cert\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847557 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847572 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055944f7-534d-4273-9960-3659d5751c2f-config\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847587 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05e846f8-1ebe-4926-8e94-784b94c246c6-metrics-tls\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847604 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c59995-27b2-4192-826d-f22e742dae38-trusted-ca\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847620 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w5nc\" (UniqueName: \"kubernetes.io/projected/87c59995-27b2-4192-826d-f22e742dae38-kube-api-access-7w5nc\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847635 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c3c207-7bf4-414a-893f-642946b0fe8f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847649 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05e846f8-1ebe-4926-8e94-784b94c246c6-trusted-ca\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847666 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847682 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8lpr\" (UniqueName: \"kubernetes.io/projected/765481c2-452a-44d8-bcc9-ba2b9e653a8b-kube-api-access-h8lpr\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847696 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/055944f7-534d-4273-9960-3659d5751c2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847714 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rm2g\" (UniqueName: \"kubernetes.io/projected/d23b0bf5-8bd5-4891-b101-a278b984dbcf-kube-api-access-9rm2g\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847738 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847765 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847792 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-trusted-ca-bundle\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847812 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzsf\" (UniqueName: 
\"kubernetes.io/projected/143b2828-1125-4598-8d3a-44fdc8023b73-kube-api-access-glzsf\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847831 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2772457b-561e-4348-b816-e9d472c4678d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847848 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hm2l\" (UniqueName: \"kubernetes.io/projected/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-kube-api-access-8hm2l\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847863 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9121e17-1727-4f92-8aa3-e636d63fc1da-srv-cert\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847880 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzpv\" (UniqueName: \"kubernetes.io/projected/1021ddb8-2b37-4da6-b560-17d91af60308-kube-api-access-ctzpv\") pod \"package-server-manager-789f6589d5-wfd9f\" (UID: \"1021ddb8-2b37-4da6-b560-17d91af60308\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847896 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9121e17-1727-4f92-8aa3-e636d63fc1da-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847913 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-config\") pod \"service-ca-operator-777779d784-btdm7\" (UID: \"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847929 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-config\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847944 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p6hnr\" (UniqueName: \"kubernetes.io/projected/569d48b9-e7f0-4e8f-b0e5-9376475359c4-kube-api-access-p6hnr\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847960 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055944f7-534d-4273-9960-3659d5751c2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847986 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.847992 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a62ee-1dc1-4dfe-a33c-eb671f426a37-service-ca-bundle\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848033 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c59995-27b2-4192-826d-f22e742dae38-config\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848050 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848066 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848099 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/285edf94-ce4e-4226-bf7f-aff67d967a6e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l6glc\" (UID: \"285edf94-ce4e-4226-bf7f-aff67d967a6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848115 5034 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-nrvn5\" (UniqueName: \"kubernetes.io/projected/285edf94-ce4e-4226-bf7f-aff67d967a6e-kube-api-access-nrvn5\") pod \"cluster-samples-operator-665b6dd947-l6glc\" (UID: \"285edf94-ce4e-4226-bf7f-aff67d967a6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848130 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-images\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848145 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qll8h\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-kube-api-access-qll8h\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848159 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b5e6e22-944f-4219-88bc-ab40a4fe37a9-cert\") pod \"ingress-canary-dm8kk\" (UID: \"9b5e6e22-944f-4219-88bc-ab40a4fe37a9\") " pod="openshift-ingress-canary/ingress-canary-dm8kk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848174 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56dee690-ebb9-4ad9-a51e-ba1f77597d94-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848195 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9cnt\" (UniqueName: \"kubernetes.io/projected/5100f0a0-b59f-49ba-8c1f-647dfba1c314-kube-api-access-z9cnt\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848224 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-csi-data-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848242 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7kc4\" (UniqueName: \"kubernetes.io/projected/c9121e17-1727-4f92-8aa3-e636d63fc1da-kube-api-access-g7kc4\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848261 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5100f0a0-b59f-49ba-8c1f-647dfba1c314-node-bootstrap-token\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848278 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/145eff0c-dcb1-47ac-be8b-10d06f7dd204-apiservice-cert\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848296 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-mountpoint-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848352 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-config\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848401 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c3c207-7bf4-414a-893f-642946b0fe8f-config\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848430 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7kq\" (UniqueName: \"kubernetes.io/projected/05e846f8-1ebe-4926-8e94-784b94c246c6-kube-api-access-4b7kq\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848478 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21e89bb2-84f3-407b-966b-b1774d96da98-config-volume\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848709 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848736 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848739 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-proxy-tls\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848774 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848795 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848815 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1021ddb8-2b37-4da6-b560-17d91af60308-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wfd9f\" (UID: \"1021ddb8-2b37-4da6-b560-17d91af60308\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848834 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48gq4\" (UniqueName: \"kubernetes.io/projected/563a62ee-1dc1-4dfe-a33c-eb671f426a37-kube-api-access-48gq4\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848853 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/648556f6-8682-4cf8-beaa-bdf944bb7f14-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j75n2\" (UID: \"648556f6-8682-4cf8-beaa-bdf944bb7f14\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848912 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkb7\" (UniqueName: \"kubernetes.io/projected/9b5e6e22-944f-4219-88bc-ab40a4fe37a9-kube-api-access-gtkb7\") pod \"ingress-canary-dm8kk\" (UID: \"9b5e6e22-944f-4219-88bc-ab40a4fe37a9\") " pod="openshift-ingress-canary/ingress-canary-dm8kk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848940 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-config-volume\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848966 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/145eff0c-dcb1-47ac-be8b-10d06f7dd204-webhook-cert\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.848990 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljkq\" (UniqueName: \"kubernetes.io/projected/1de23081-5abe-4824-b62b-17a083c43073-kube-api-access-bljkq\") pod \"multus-admission-controller-857f4d67dd-hz9rx\" (UID: \"1de23081-5abe-4824-b62b-17a083c43073\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849020 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-bound-sa-token\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849041 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4mzz\" (UniqueName: \"kubernetes.io/projected/9ff8a90e-11c2-4cf9-b8db-e5c90a552709-kube-api-access-f4mzz\") pod \"migrator-59844c95c7-2w4bs\" (UID: \"9ff8a90e-11c2-4cf9-b8db-e5c90a552709\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849069 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-client\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849132 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849205 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849238 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849249 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849265 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-ca\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849291 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56dee690-ebb9-4ad9-a51e-ba1f77597d94-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849415 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: E0105 21:54:04.849500 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.349481358 +0000 UTC m=+137.721480847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849504 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055944f7-534d-4273-9960-3659d5751c2f-config\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849549 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-serving-cert\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849587 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-serving-cert\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849619 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c3c207-7bf4-414a-893f-642946b0fe8f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849652 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849704 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8rkb\" (UniqueName: \"kubernetes.io/projected/8c098c94-1752-4ea9-a292-e650d2b73ab6-kube-api-access-c8rkb\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849732 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z6bj\" (UniqueName: \"kubernetes.io/projected/648556f6-8682-4cf8-beaa-bdf944bb7f14-kube-api-access-4z6bj\") pod \"control-plane-machine-set-operator-78cbb6b69f-j75n2\" (UID: \"648556f6-8682-4cf8-beaa-bdf944bb7f14\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849778 
5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849830 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-service-ca\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849851 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-662fh\" (UniqueName: \"kubernetes.io/projected/56dee690-ebb9-4ad9-a51e-ba1f77597d94-kube-api-access-662fh\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.849950 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-audit-policies\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850141 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xngvc\" (UniqueName: \"kubernetes.io/projected/2772457b-561e-4348-b816-e9d472c4678d-kube-api-access-xngvc\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850224 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/765481c2-452a-44d8-bcc9-ba2b9e653a8b-audit-dir\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850271 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-serving-cert\") pod \"service-ca-operator-777779d784-btdm7\" (UID: \"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850293 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-oauth-serving-cert\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850320 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-socket-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850339 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-trusted-ca\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850382 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-metrics-tls\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850386 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/765481c2-452a-44d8-bcc9-ba2b9e653a8b-audit-policies\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850398 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/569d48b9-e7f0-4e8f-b0e5-9376475359c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850419 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbqvb\" (UniqueName: \"kubernetes.io/projected/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-kube-api-access-fbqvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850526 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24kz\" (UniqueName: \"kubernetes.io/projected/af1871ae-05fe-4597-8bb9-e2525f739922-kube-api-access-v24kz\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850571 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/765481c2-452a-44d8-bcc9-ba2b9e653a8b-audit-dir\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850577 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-stats-auth\") pod 
\"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850636 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05e846f8-1ebe-4926-8e94-784b94c246c6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850680 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-client-ca\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850734 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de23081-5abe-4824-b62b-17a083c43073-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hz9rx\" (UID: \"1de23081-5abe-4824-b62b-17a083c43073\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850769 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gkq\" (UniqueName: \"kubernetes.io/projected/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-kube-api-access-66gkq\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850849 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gj2x\" (UniqueName: \"kubernetes.io/projected/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-kube-api-access-8gj2x\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850877 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-config\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850926 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzfs\" (UniqueName: \"kubernetes.io/projected/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-kube-api-access-npzfs\") pod \"service-ca-operator-777779d784-btdm7\" (UID: \"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.850949 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2772457b-561e-4348-b816-e9d472c4678d-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851054 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-metrics-certs\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851120 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg25d\" (UniqueName: \"kubernetes.io/projected/f4747c26-8a6b-4d60-ae91-36f9d7b86f14-kube-api-access-jg25d\") pod \"downloads-7954f5f757-xz2sp\" (UID: \"f4747c26-8a6b-4d60-ae91-36f9d7b86f14\") " pod="openshift-console/downloads-7954f5f757-xz2sp" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851181 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2772457b-561e-4348-b816-e9d472c4678d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851214 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-registration-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851238 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5100f0a0-b59f-49ba-8c1f-647dfba1c314-certs\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851264 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/569d48b9-e7f0-4e8f-b0e5-9376475359c4-srv-cert\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851302 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c098c94-1752-4ea9-a292-e650d2b73ab6-signing-cabundle\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851328 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-serving-cert\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " 
pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851349 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b2828-1125-4598-8d3a-44fdc8023b73-serving-cert\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851376 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2km\" (UniqueName: \"kubernetes.io/projected/145eff0c-dcb1-47ac-be8b-10d06f7dd204-kube-api-access-zb2km\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851403 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055944f7-534d-4273-9960-3659d5751c2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851421 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-tls\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851446 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-encryption-config\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851468 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-proxy-tls\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851490 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-oauth-config\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851515 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-service-ca\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851534 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jb5\" (UniqueName: \"kubernetes.io/projected/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-kube-api-access-42jb5\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851556 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xfk2\" (UniqueName: \"kubernetes.io/projected/9f9df170-8b91-4370-8dce-46e91312904c-kube-api-access-2xfk2\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851581 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-etcd-client\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851606 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c098c94-1752-4ea9-a292-e650d2b73ab6-signing-key\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.851625 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gff2t\" (UniqueName: \"kubernetes.io/projected/21e89bb2-84f3-407b-966b-b1774d96da98-kube-api-access-gff2t\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.852966 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.854086 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-encryption-config\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.854529 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-images\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.854586 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-proxy-tls\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: 
\"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.854788 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-etcd-client\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.855100 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/765481c2-452a-44d8-bcc9-ba2b9e653a8b-serving-cert\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.855440 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-tls\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.856198 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.858779 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/285edf94-ce4e-4226-bf7f-aff67d967a6e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l6glc\" (UID: \"285edf94-ce4e-4226-bf7f-aff67d967a6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.866332 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.899510 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd4dl\" (UniqueName: \"kubernetes.io/projected/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-kube-api-access-dd4dl\") pod \"oauth-openshift-558db77b4-g2l2g\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.920000 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzp47\" (UniqueName: \"kubernetes.io/projected/3880fa85-26b0-4ed9-9b69-fe57b8c01092-kube-api-access-dzp47\") pod \"controller-manager-879f6c89f-xdbd6\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.937205 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t25b2\" (UniqueName: \"kubernetes.io/projected/579527a6-1737-40f2-8cfa-1798cc770142-kube-api-access-t25b2\") pod \"machine-api-operator-5694c8668f-94ljp\" (UID: \"579527a6-1737-40f2-8cfa-1798cc770142\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.942945 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954311 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954535 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de23081-5abe-4824-b62b-17a083c43073-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hz9rx\" (UID: \"1de23081-5abe-4824-b62b-17a083c43073\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954568 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gkq\" (UniqueName: \"kubernetes.io/projected/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-kube-api-access-66gkq\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954595 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gj2x\" (UniqueName: \"kubernetes.io/projected/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-kube-api-access-8gj2x\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: E0105 21:54:04.954649 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.454601599 +0000 UTC m=+137.826601048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954736 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-config\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954807 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npzfs\" (UniqueName: \"kubernetes.io/projected/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-kube-api-access-npzfs\") pod \"service-ca-operator-777779d784-btdm7\" (UID: \"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954843 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2772457b-561e-4348-b816-e9d472c4678d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954873 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-metrics-certs\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954932 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg25d\" (UniqueName: \"kubernetes.io/projected/f4747c26-8a6b-4d60-ae91-36f9d7b86f14-kube-api-access-jg25d\") pod \"downloads-7954f5f757-xz2sp\" (UID: \"f4747c26-8a6b-4d60-ae91-36f9d7b86f14\") " pod="openshift-console/downloads-7954f5f757-xz2sp" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.954982 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2772457b-561e-4348-b816-e9d472c4678d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955018 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-registration-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955048 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5100f0a0-b59f-49ba-8c1f-647dfba1c314-certs\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955137 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/569d48b9-e7f0-4e8f-b0e5-9376475359c4-srv-cert\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955184 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c098c94-1752-4ea9-a292-e650d2b73ab6-signing-cabundle\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955219 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-serving-cert\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955249 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b2828-1125-4598-8d3a-44fdc8023b73-serving-cert\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955283 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2km\" (UniqueName: \"kubernetes.io/projected/145eff0c-dcb1-47ac-be8b-10d06f7dd204-kube-api-access-zb2km\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955324 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-oauth-config\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955352 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-registration-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955363 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-service-ca\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955444 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jb5\" (UniqueName: \"kubernetes.io/projected/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-kube-api-access-42jb5\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955470 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xfk2\" (UniqueName: \"kubernetes.io/projected/9f9df170-8b91-4370-8dce-46e91312904c-kube-api-access-2xfk2\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955501 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c098c94-1752-4ea9-a292-e650d2b73ab6-signing-key\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955523 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gff2t\" (UniqueName: \"kubernetes.io/projected/21e89bb2-84f3-407b-966b-b1774d96da98-kube-api-access-gff2t\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955548 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/145eff0c-dcb1-47ac-be8b-10d06f7dd204-tmpfs\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955588 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9afea6f6-4124-424d-a820-33b15ec35121-metrics-tls\") pod \"dns-operator-744455d44c-cthth\" (UID: \"9afea6f6-4124-424d-a820-33b15ec35121\") " pod="openshift-dns-operator/dns-operator-744455d44c-cthth" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955613 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-plugins-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955635 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21e89bb2-84f3-407b-966b-b1774d96da98-secret-volume\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955664 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6xh\" (UniqueName: \"kubernetes.io/projected/9afea6f6-4124-424d-a820-33b15ec35121-kube-api-access-lj6xh\") pod \"dns-operator-744455d44c-cthth\" (UID: \"9afea6f6-4124-424d-a820-33b15ec35121\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-cthth" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955706 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-default-certificate\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955733 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c59995-27b2-4192-826d-f22e742dae38-serving-cert\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955754 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05e846f8-1ebe-4926-8e94-784b94c246c6-metrics-tls\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955770 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c59995-27b2-4192-826d-f22e742dae38-trusted-ca\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955784 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-config\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955787 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w5nc\" (UniqueName: \"kubernetes.io/projected/87c59995-27b2-4192-826d-f22e742dae38-kube-api-access-7w5nc\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955839 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c3c207-7bf4-414a-893f-642946b0fe8f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955859 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05e846f8-1ebe-4926-8e94-784b94c246c6-trusted-ca\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955893 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rm2g\" (UniqueName: 
\"kubernetes.io/projected/d23b0bf5-8bd5-4891-b101-a278b984dbcf-kube-api-access-9rm2g\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955921 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955941 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-trusted-ca-bundle\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955965 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glzsf\" (UniqueName: \"kubernetes.io/projected/143b2828-1125-4598-8d3a-44fdc8023b73-kube-api-access-glzsf\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.955982 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2772457b-561e-4348-b816-e9d472c4678d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956007 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9121e17-1727-4f92-8aa3-e636d63fc1da-srv-cert\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956025 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctzpv\" (UniqueName: \"kubernetes.io/projected/1021ddb8-2b37-4da6-b560-17d91af60308-kube-api-access-ctzpv\") pod \"package-server-manager-789f6589d5-wfd9f\" (UID: \"1021ddb8-2b37-4da6-b560-17d91af60308\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956043 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9121e17-1727-4f92-8aa3-e636d63fc1da-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956066 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-config\") pod \"service-ca-operator-777779d784-btdm7\" (UID: 
\"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956107 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-config\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956131 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6hnr\" (UniqueName: \"kubernetes.io/projected/569d48b9-e7f0-4e8f-b0e5-9376475359c4-kube-api-access-p6hnr\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956166 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a62ee-1dc1-4dfe-a33c-eb671f426a37-service-ca-bundle\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956186 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c59995-27b2-4192-826d-f22e742dae38-config\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956215 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956246 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b5e6e22-944f-4219-88bc-ab40a4fe37a9-cert\") pod \"ingress-canary-dm8kk\" (UID: \"9b5e6e22-944f-4219-88bc-ab40a4fe37a9\") " pod="openshift-ingress-canary/ingress-canary-dm8kk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956263 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56dee690-ebb9-4ad9-a51e-ba1f77597d94-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956309 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9cnt\" (UniqueName: \"kubernetes.io/projected/5100f0a0-b59f-49ba-8c1f-647dfba1c314-kube-api-access-z9cnt\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956342 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-csi-data-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956343 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/145eff0c-dcb1-47ac-be8b-10d06f7dd204-tmpfs\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956357 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7kc4\" (UniqueName: \"kubernetes.io/projected/c9121e17-1727-4f92-8aa3-e636d63fc1da-kube-api-access-g7kc4\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956398 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5100f0a0-b59f-49ba-8c1f-647dfba1c314-node-bootstrap-token\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956426 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/145eff0c-dcb1-47ac-be8b-10d06f7dd204-apiservice-cert\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956451 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-mountpoint-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956481 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-config\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956503 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c3c207-7bf4-414a-893f-642946b0fe8f-config\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956528 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7kq\" (UniqueName: \"kubernetes.io/projected/05e846f8-1ebe-4926-8e94-784b94c246c6-kube-api-access-4b7kq\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: 
\"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956551 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21e89bb2-84f3-407b-966b-b1774d96da98-config-volume\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956574 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956597 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-proxy-tls\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956621 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956652 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1021ddb8-2b37-4da6-b560-17d91af60308-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wfd9f\" (UID: \"1021ddb8-2b37-4da6-b560-17d91af60308\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956684 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48gq4\" (UniqueName: \"kubernetes.io/projected/563a62ee-1dc1-4dfe-a33c-eb671f426a37-kube-api-access-48gq4\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956711 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/648556f6-8682-4cf8-beaa-bdf944bb7f14-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j75n2\" (UID: \"648556f6-8682-4cf8-beaa-bdf944bb7f14\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956743 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtkb7\" (UniqueName: \"kubernetes.io/projected/9b5e6e22-944f-4219-88bc-ab40a4fe37a9-kube-api-access-gtkb7\") pod \"ingress-canary-dm8kk\" (UID: 
\"9b5e6e22-944f-4219-88bc-ab40a4fe37a9\") " pod="openshift-ingress-canary/ingress-canary-dm8kk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956765 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-config-volume\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956788 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/145eff0c-dcb1-47ac-be8b-10d06f7dd204-webhook-cert\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956815 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bljkq\" (UniqueName: \"kubernetes.io/projected/1de23081-5abe-4824-b62b-17a083c43073-kube-api-access-bljkq\") pod \"multus-admission-controller-857f4d67dd-hz9rx\" (UID: \"1de23081-5abe-4824-b62b-17a083c43073\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956856 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-client\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956886 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956917 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-ca\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956942 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56dee690-ebb9-4ad9-a51e-ba1f77597d94-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956971 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-serving-cert\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.956997 5034 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c3c207-7bf4-414a-893f-642946b0fe8f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957025 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8rkb\" (UniqueName: \"kubernetes.io/projected/8c098c94-1752-4ea9-a292-e650d2b73ab6-kube-api-access-c8rkb\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957051 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z6bj\" (UniqueName: \"kubernetes.io/projected/648556f6-8682-4cf8-beaa-bdf944bb7f14-kube-api-access-4z6bj\") pod \"control-plane-machine-set-operator-78cbb6b69f-j75n2\" (UID: \"648556f6-8682-4cf8-beaa-bdf944bb7f14\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957094 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957125 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-service-ca\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957150 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-662fh\" (UniqueName: \"kubernetes.io/projected/56dee690-ebb9-4ad9-a51e-ba1f77597d94-kube-api-access-662fh\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957179 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xngvc\" (UniqueName: \"kubernetes.io/projected/2772457b-561e-4348-b816-e9d472c4678d-kube-api-access-xngvc\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957217 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-serving-cert\") pod \"service-ca-operator-777779d784-btdm7\" (UID: \"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957239 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-oauth-serving-cert\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957264 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-socket-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957300 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-metrics-tls\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957324 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/569d48b9-e7f0-4e8f-b0e5-9376475359c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957351 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbqvb\" (UniqueName: \"kubernetes.io/projected/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-kube-api-access-fbqvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957378 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24kz\" (UniqueName: \"kubernetes.io/projected/af1871ae-05fe-4597-8bb9-e2525f739922-kube-api-access-v24kz\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957405 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-stats-auth\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957430 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05e846f8-1ebe-4926-8e94-784b94c246c6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957453 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-client-ca\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" 
Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957815 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2772457b-561e-4348-b816-e9d472c4678d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.958280 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-trusted-ca-bundle\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.958397 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-plugins-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.958573 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-serving-cert\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.959165 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-metrics-certs\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.959283 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.959558 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c59995-27b2-4192-826d-f22e742dae38-config\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.959612 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/563a62ee-1dc1-4dfe-a33c-eb671f426a37-service-ca-bundle\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.961716 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9121e17-1727-4f92-8aa3-e636d63fc1da-srv-cert\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.962200 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de23081-5abe-4824-b62b-17a083c43073-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hz9rx\" (UID: \"1de23081-5abe-4824-b62b-17a083c43073\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.962658 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/145eff0c-dcb1-47ac-be8b-10d06f7dd204-webhook-cert\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.962744 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56dee690-ebb9-4ad9-a51e-ba1f77597d94-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.962987 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-csi-data-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.963212 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/569d48b9-e7f0-4e8f-b0e5-9376475359c4-srv-cert\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.963292 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-proxy-tls\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.963603 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9afea6f6-4124-424d-a820-33b15ec35121-metrics-tls\") pod \"dns-operator-744455d44c-cthth\" (UID: \"9afea6f6-4124-424d-a820-33b15ec35121\") " pod="openshift-dns-operator/dns-operator-744455d44c-cthth" Jan 05 21:54:04 crc kubenswrapper[5034]: E0105 21:54:04.963642 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.463626945 +0000 UTC m=+137.835626384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.964290 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.964456 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05e846f8-1ebe-4926-8e94-784b94c246c6-metrics-tls\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.964553 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-oauth-serving-cert\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.964823 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-socket-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.957378 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05e846f8-1ebe-4926-8e94-784b94c246c6-trusted-ca\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.965295 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56dee690-ebb9-4ad9-a51e-ba1f77597d94-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.965654 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-service-ca\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.965721 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9f9df170-8b91-4370-8dce-46e91312904c-mountpoint-dir\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.966399 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.966617 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-default-certificate\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.967193 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9121e17-1727-4f92-8aa3-e636d63fc1da-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.967205 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c59995-27b2-4192-826d-f22e742dae38-trusted-ca\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.967681 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1021ddb8-2b37-4da6-b560-17d91af60308-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wfd9f\" (UID: \"1021ddb8-2b37-4da6-b560-17d91af60308\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.968371 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-oauth-config\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.968551 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/145eff0c-dcb1-47ac-be8b-10d06f7dd204-apiservice-cert\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.969436 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.970000 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c59995-27b2-4192-826d-f22e742dae38-serving-cert\") pod \"console-operator-58897d9998-f94p5\" (UID: 
\"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.970157 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/569d48b9-e7f0-4e8f-b0e5-9376475359c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.970891 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/563a62ee-1dc1-4dfe-a33c-eb671f426a37-stats-auth\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:04 crc kubenswrapper[5034]: I0105 21:54:04.983039 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.002656 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.023661 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.042800 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.057832 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.058392 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.558363893 +0000 UTC m=+137.930363382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.063565 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.071685 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2772457b-561e-4348-b816-e9d472c4678d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.082755 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.096361 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.104155 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.123842 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.126767 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c098c94-1752-4ea9-a292-e650d2b73ab6-signing-cabundle\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.131814 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.143103 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.149735 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c098c94-1752-4ea9-a292-e650d2b73ab6-signing-key\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.159877 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.160297 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.660272585 +0000 UTC m=+138.032272024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.163111 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.166993 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.182985 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.202199 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.228931 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.230698 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.242989 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.251369 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.261231 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.261417 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.761390934 +0000 UTC m=+138.133390393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.262168 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.262653 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.263170 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.763154094 +0000 UTC m=+138.135153543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.283055 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.303315 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.323460 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.325937 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-certificates\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.328569 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-service-ca\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.328724 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-config\") pod 
\"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.329875 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-trusted-ca\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.329936 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-client\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.332508 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-client-ca\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.333845 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-etcd-ca\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.333944 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21e89bb2-84f3-407b-966b-b1774d96da98-config-volume\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.334411 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b2828-1125-4598-8d3a-44fdc8023b73-serving-cert\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.342277 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-config\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.342797 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21e89bb2-84f3-407b-966b-b1774d96da98-secret-volume\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.344349 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-serving-cert\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.345074 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.346845 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7rtg"] Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.349803 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt"] Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.352567 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77"] Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.354468 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vpvt5"] Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.356336 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-serving-cert\") pod \"service-ca-operator-777779d784-btdm7\" (UID: \"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.389854 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.390028 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.890001821 +0000 UTC m=+138.262001260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.390412 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.390602 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.390900 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: W0105 21:54:05.391284 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16522025_6bf5_4451_85e6_2df92d8164c2.slice/crio-f2704f6349bf61f41d1d23a94442e2b98b4322561f80f745173c6d394fe738d1 WatchSource:0}: Error finding container f2704f6349bf61f41d1d23a94442e2b98b4322561f80f745173c6d394fe738d1: Status 404 returned error can't find the container with id f2704f6349bf61f41d1d23a94442e2b98b4322561f80f745173c6d394fe738d1 Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.392950 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-config\") pod \"service-ca-operator-777779d784-btdm7\" (UID: \"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.397243 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.897220695 +0000 UTC m=+138.269220134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: W0105 21:54:05.401910 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2510464_62e1_4d58_913e_35f87bed60d7.slice/crio-6225d8dcda33233608cd62051d2576aadc0bbf4b4e10d66c4b0b8a665967d977 WatchSource:0}: Error finding container 6225d8dcda33233608cd62051d2576aadc0bbf4b4e10d66c4b0b8a665967d977: Status 404 returned error can't find the container with id 6225d8dcda33233608cd62051d2576aadc0bbf4b4e10d66c4b0b8a665967d977 Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.412701 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.423169 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.442536 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.449838 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/648556f6-8682-4cf8-beaa-bdf944bb7f14-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j75n2\" (UID: \"648556f6-8682-4cf8-beaa-bdf944bb7f14\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.464035 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.483287 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.490718 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.491249 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:05.991232783 +0000 UTC m=+138.363232222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.502934 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.522886 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.528578 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-94ljp"] Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.542735 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.548864 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c3c207-7bf4-414a-893f-642946b0fe8f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.569639 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.570920 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c3c207-7bf4-414a-893f-642946b0fe8f-config\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.577884 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g2l2g"] Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.584028 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.592672 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.593009 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.092997711 +0000 UTC m=+138.464997150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: W0105 21:54:05.604243 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6db10d40_2f3c_44aa_a116_8f5ffa8577cd.slice/crio-95dfbc7feebd24178b7574e09efca50c4fa462ff11dd4ea326533606ef69d453 WatchSource:0}: Error finding container 95dfbc7feebd24178b7574e09efca50c4fa462ff11dd4ea326533606ef69d453: Status 404 returned error can't find the container with id 95dfbc7feebd24178b7574e09efca50c4fa462ff11dd4ea326533606ef69d453 Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.605467 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.610275 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdbd6"] Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.616927 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5100f0a0-b59f-49ba-8c1f-647dfba1c314-node-bootstrap-token\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.623329 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.628340 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5100f0a0-b59f-49ba-8c1f-647dfba1c314-certs\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.643470 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.663055 5034 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 05 21:54:05 crc kubenswrapper[5034]: W0105 21:54:05.682118 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3880fa85_26b0_4ed9_9b69_fe57b8c01092.slice/crio-cf4d81ca4e848b5e6dc0efb44b36f72624996412b351f209f7eba6fa120f549b WatchSource:0}: Error finding container cf4d81ca4e848b5e6dc0efb44b36f72624996412b351f209f7eba6fa120f549b: Status 404 returned error can't find the container with id cf4d81ca4e848b5e6dc0efb44b36f72624996412b351f209f7eba6fa120f549b Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.682642 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.693120 5034 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.693525 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.193490622 +0000 UTC m=+138.565490141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.693742 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.694134 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.19412281 +0000 UTC m=+138.566122249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.702861 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.725512 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.742729 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.765815 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.772851 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b5e6e22-944f-4219-88bc-ab40a4fe37a9-cert\") pod \"ingress-canary-dm8kk\" (UID: \"9b5e6e22-944f-4219-88bc-ab40a4fe37a9\") " pod="openshift-ingress-canary/ingress-canary-dm8kk" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.783194 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.786864 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-config-volume\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.795407 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.796393 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.296343511 +0000 UTC m=+138.668342950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.801538 5034 request.go:700] Waited for 1.931048412s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0 Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.803135 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.810712 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-metrics-tls\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.825587 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.844631 5034 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xdbd6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.844712 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" podUID="3880fa85-26b0-4ed9-9b69-fe57b8c01092" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.849646 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.849693 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" event={"ID":"3880fa85-26b0-4ed9-9b69-fe57b8c01092","Type":"ContainerStarted","Data":"e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.849710 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" event={"ID":"3880fa85-26b0-4ed9-9b69-fe57b8c01092","Type":"ContainerStarted","Data":"cf4d81ca4e848b5e6dc0efb44b36f72624996412b351f209f7eba6fa120f549b"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.849723 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" event={"ID":"03513125-c6f9-46c6-a4b6-87a82b869132","Type":"ContainerStarted","Data":"4695a5f276753ff9c95c025ede594b295ecd2ccb9012049cdfcdfa25ecc509cc"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.849736 5034 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" event={"ID":"03513125-c6f9-46c6-a4b6-87a82b869132","Type":"ContainerStarted","Data":"9dda56fdf4d17d70c915be32043a5b817f48e3be970a337ba1ddc41ad15e74da"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.850194 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" event={"ID":"579527a6-1737-40f2-8cfa-1798cc770142","Type":"ContainerStarted","Data":"72dba5b451f67d68e328c4f330b290a4de8b6fe1a133d6e99fec5e5a924a54b6"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.850245 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" event={"ID":"579527a6-1737-40f2-8cfa-1798cc770142","Type":"ContainerStarted","Data":"55ac4da7d30dc0bd480412d9e42878c9c44528c22a14d6118fee9013ad8887ed"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.853849 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" event={"ID":"ff056103-f552-4ce0-a4d1-83570b0ef42c","Type":"ContainerStarted","Data":"4c877e1a370c6d476518ceeaf0df194e99e84db358549fb0f056deb45471efc0"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.853876 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" event={"ID":"ff056103-f552-4ce0-a4d1-83570b0ef42c","Type":"ContainerStarted","Data":"6ff1fa854d85dfd8925c0d1d124e333f48d90d7a940ad9d8d8663c895650f9da"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.855185 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" event={"ID":"6db10d40-2f3c-44aa-a116-8f5ffa8577cd","Type":"ContainerStarted","Data":"95dfbc7feebd24178b7574e09efca50c4fa462ff11dd4ea326533606ef69d453"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.856613 5034 generic.go:334] "Generic (PLEG): container finished" podID="16522025-6bf5-4451-85e6-2df92d8164c2" containerID="439c2295845562516551fe0805889b3b7588de63e227ec0087c74c3cb6e77359" exitCode=0 Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.856706 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" event={"ID":"16522025-6bf5-4451-85e6-2df92d8164c2","Type":"ContainerDied","Data":"439c2295845562516551fe0805889b3b7588de63e227ec0087c74c3cb6e77359"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.856757 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" event={"ID":"16522025-6bf5-4451-85e6-2df92d8164c2","Type":"ContainerStarted","Data":"f2704f6349bf61f41d1d23a94442e2b98b4322561f80f745173c6d394fe738d1"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.858130 5034 generic.go:334] "Generic (PLEG): container finished" podID="6019a2f9-5524-4776-851a-e30c348536d0" containerID="24db1c152b7535e531c04f815949b84dada31b922256963afaba3b670ea7b824" exitCode=0 Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.858279 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" event={"ID":"6019a2f9-5524-4776-851a-e30c348536d0","Type":"ContainerDied","Data":"24db1c152b7535e531c04f815949b84dada31b922256963afaba3b670ea7b824"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.858324 5034 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" event={"ID":"6019a2f9-5524-4776-851a-e30c348536d0","Type":"ContainerStarted","Data":"d977eea8a8756ff9c93b3a4f829aa107918c6c59bb8bc0e081fffb0f6aa19f17"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.860463 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" event={"ID":"c2510464-62e1-4d58-913e-35f87bed60d7","Type":"ContainerStarted","Data":"1a066035b9a120df9166e70fbac0f329901e11e535203d822c4ed6d57867e9f7"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.860501 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" event={"ID":"c2510464-62e1-4d58-913e-35f87bed60d7","Type":"ContainerStarted","Data":"6225d8dcda33233608cd62051d2576aadc0bbf4b4e10d66c4b0b8a665967d977"} Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.880474 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hm2l\" (UniqueName: \"kubernetes.io/projected/7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c-kube-api-access-8hm2l\") pod \"machine-config-operator-74547568cd-69m5r\" (UID: \"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.896596 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8lpr\" (UniqueName: \"kubernetes.io/projected/765481c2-452a-44d8-bcc9-ba2b9e653a8b-kube-api-access-h8lpr\") pod \"apiserver-7bbb656c7d-h7fvf\" (UID: \"765481c2-452a-44d8-bcc9-ba2b9e653a8b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.898800 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: E0105 21:54:05.903763 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.403745808 +0000 UTC m=+138.775745457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.907031 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.919072 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qll8h\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-kube-api-access-qll8h\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.927630 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.940817 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/055944f7-534d-4273-9960-3659d5751c2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-86f42\" (UID: \"055944f7-534d-4273-9960-3659d5751c2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.960496 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-bound-sa-token\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:05 crc kubenswrapper[5034]: I0105 21:54:05.995437 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4mzz\" (UniqueName: \"kubernetes.io/projected/9ff8a90e-11c2-4cf9-b8db-e5c90a552709-kube-api-access-f4mzz\") pod \"migrator-59844c95c7-2w4bs\" (UID: \"9ff8a90e-11c2-4cf9-b8db-e5c90a552709\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.000771 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.001704 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.501682207 +0000 UTC m=+138.873681646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.024654 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726ec7f1-554d-46b9-83ff-bd08e7e8fb2a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k9zzr\" (UID: \"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.027304 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrvn5\" (UniqueName: \"kubernetes.io/projected/285edf94-ce4e-4226-bf7f-aff67d967a6e-kube-api-access-nrvn5\") pod \"cluster-samples-operator-665b6dd947-l6glc\" (UID: \"285edf94-ce4e-4226-bf7f-aff67d967a6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.046373 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gj2x\" (UniqueName: \"kubernetes.io/projected/3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441-kube-api-access-8gj2x\") pod \"etcd-operator-b45778765-kjmjm\" (UID: \"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.057664 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66gkq\" (UniqueName: \"kubernetes.io/projected/4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9-kube-api-access-66gkq\") pod \"machine-config-controller-84d6567774-6nv6f\" (UID: \"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.082838 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npzfs\" (UniqueName: \"kubernetes.io/projected/7dd18831-c5ce-4983-9c86-4ac4a560e0a6-kube-api-access-npzfs\") pod \"service-ca-operator-777779d784-btdm7\" (UID: \"7dd18831-c5ce-4983-9c86-4ac4a560e0a6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.102312 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.102648 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.602637592 +0000 UTC m=+138.974637031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.111276 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2772457b-561e-4348-b816-e9d472c4678d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.119558 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg25d\" (UniqueName: \"kubernetes.io/projected/f4747c26-8a6b-4d60-ae91-36f9d7b86f14-kube-api-access-jg25d\") pod \"downloads-7954f5f757-xz2sp\" (UID: \"f4747c26-8a6b-4d60-ae91-36f9d7b86f14\") " pod="openshift-console/downloads-7954f5f757-xz2sp" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.123954 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.140575 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.154528 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xz2sp" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.155482 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w5nc\" (UniqueName: \"kubernetes.io/projected/87c59995-27b2-4192-826d-f22e742dae38-kube-api-access-7w5nc\") pod \"console-operator-58897d9998-f94p5\" (UID: \"87c59995-27b2-4192-826d-f22e742dae38\") " pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.168337 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jb5\" (UniqueName: \"kubernetes.io/projected/c9db8506-ddd2-448c-b9ce-1d16b1cae0d6-kube-api-access-42jb5\") pod \"dns-default-hv5s8\" (UID: \"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6\") " pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.179532 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xfk2\" (UniqueName: \"kubernetes.io/projected/9f9df170-8b91-4370-8dce-46e91312904c-kube-api-access-2xfk2\") pod \"csi-hostpathplugin-47rdk\" (UID: \"9f9df170-8b91-4370-8dce-46e91312904c\") " pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.187557 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-47rdk" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.203213 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.203360 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.203844 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.703824543 +0000 UTC m=+139.075823982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.215516 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gff2t\" (UniqueName: \"kubernetes.io/projected/21e89bb2-84f3-407b-966b-b1774d96da98-kube-api-access-gff2t\") pod \"collect-profiles-29460825-ck6dh\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.226686 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7kc4\" (UniqueName: \"kubernetes.io/projected/c9121e17-1727-4f92-8aa3-e636d63fc1da-kube-api-access-g7kc4\") pod \"olm-operator-6b444d44fb-z4stt\" (UID: \"c9121e17-1727-4f92-8aa3-e636d63fc1da\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.249803 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.249949 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.251515 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.260510 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c3c207-7bf4-414a-893f-642946b0fe8f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8lxbm\" (UID: \"07c3c207-7bf4-414a-893f-642946b0fe8f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.293865 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.295173 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rm2g\" (UniqueName: \"kubernetes.io/projected/d23b0bf5-8bd5-4891-b101-a278b984dbcf-kube-api-access-9rm2g\") pod \"console-f9d7485db-8wssg\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") " pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.295997 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6hnr\" (UniqueName: \"kubernetes.io/projected/569d48b9-e7f0-4e8f-b0e5-9376475359c4-kube-api-access-p6hnr\") pod \"catalog-operator-68c6474976-v6d74\" (UID: \"569d48b9-e7f0-4e8f-b0e5-9376475359c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.301238 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.303331 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glzsf\" (UniqueName: \"kubernetes.io/projected/143b2828-1125-4598-8d3a-44fdc8023b73-kube-api-access-glzsf\") pod \"route-controller-manager-6576b87f9c-c9bmq\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.304638 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.304947 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.804931812 +0000 UTC m=+139.176931251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.334893 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.335985 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r"] Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.339325 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7kq\" (UniqueName: \"kubernetes.io/projected/05e846f8-1ebe-4926-8e94-784b94c246c6-kube-api-access-4b7kq\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.346306 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-662fh\" (UniqueName: \"kubernetes.io/projected/56dee690-ebb9-4ad9-a51e-ba1f77597d94-kube-api-access-662fh\") pod \"kube-storage-version-migrator-operator-b67b599dd-77hq4\" (UID: \"56dee690-ebb9-4ad9-a51e-ba1f77597d94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.346899 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.348612 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf"] Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.372353 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bljkq\" (UniqueName: \"kubernetes.io/projected/1de23081-5abe-4824-b62b-17a083c43073-kube-api-access-bljkq\") pod \"multus-admission-controller-857f4d67dd-hz9rx\" (UID: \"1de23081-5abe-4824-b62b-17a083c43073\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.388973 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.409863 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.410420 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:06.910401664 +0000 UTC m=+139.282401103 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.410658 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9cnt\" (UniqueName: \"kubernetes.io/projected/5100f0a0-b59f-49ba-8c1f-647dfba1c314-kube-api-access-z9cnt\") pod \"machine-config-server-rttcs\" (UID: \"5100f0a0-b59f-49ba-8c1f-647dfba1c314\") " pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.420283 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.424711 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48gq4\" (UniqueName: \"kubernetes.io/projected/563a62ee-1dc1-4dfe-a33c-eb671f426a37-kube-api-access-48gq4\") pod \"router-default-5444994796-lhk82\" (UID: \"563a62ee-1dc1-4dfe-a33c-eb671f426a37\") " pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.449176 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.464610 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rttcs" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.466899 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xngvc\" (UniqueName: \"kubernetes.io/projected/2772457b-561e-4348-b816-e9d472c4678d-kube-api-access-xngvc\") pod \"cluster-image-registry-operator-dc59b4c8b-b477s\" (UID: \"2772457b-561e-4348-b816-e9d472c4678d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.486968 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctzpv\" (UniqueName: \"kubernetes.io/projected/1021ddb8-2b37-4da6-b560-17d91af60308-kube-api-access-ctzpv\") pod \"package-server-manager-789f6589d5-wfd9f\" (UID: \"1021ddb8-2b37-4da6-b560-17d91af60308\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.489953 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8rkb\" (UniqueName: \"kubernetes.io/projected/8c098c94-1752-4ea9-a292-e650d2b73ab6-kube-api-access-c8rkb\") pod \"service-ca-9c57cc56f-4fdnm\" (UID: \"8c098c94-1752-4ea9-a292-e650d2b73ab6\") " pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.499666 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6xh\" (UniqueName: \"kubernetes.io/projected/9afea6f6-4124-424d-a820-33b15ec35121-kube-api-access-lj6xh\") pod \"dns-operator-744455d44c-cthth\" (UID: \"9afea6f6-4124-424d-a820-33b15ec35121\") " pod="openshift-dns-operator/dns-operator-744455d44c-cthth" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.514931 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.515255 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.015243609 +0000 UTC m=+139.387243048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.517326 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z6bj\" (UniqueName: \"kubernetes.io/projected/648556f6-8682-4cf8-beaa-bdf944bb7f14-kube-api-access-4z6bj\") pod \"control-plane-machine-set-operator-78cbb6b69f-j75n2\" (UID: \"648556f6-8682-4cf8-beaa-bdf944bb7f14\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.521251 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb2km\" (UniqueName: \"kubernetes.io/projected/145eff0c-dcb1-47ac-be8b-10d06f7dd204-kube-api-access-zb2km\") pod \"packageserver-d55dfcdfc-4j99d\" (UID: \"145eff0c-dcb1-47ac-be8b-10d06f7dd204\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.549813 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtkb7\" (UniqueName: \"kubernetes.io/projected/9b5e6e22-944f-4219-88bc-ab40a4fe37a9-kube-api-access-gtkb7\") pod \"ingress-canary-dm8kk\" (UID: \"9b5e6e22-944f-4219-88bc-ab40a4fe37a9\") " pod="openshift-ingress-canary/ingress-canary-dm8kk" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.559349 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.565857 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.571972 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.588421 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.588823 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24kz\" (UniqueName: \"kubernetes.io/projected/af1871ae-05fe-4597-8bb9-e2525f739922-kube-api-access-v24kz\") pod \"marketplace-operator-79b997595-dd7xb\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.602200 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbqvb\" (UniqueName: \"kubernetes.io/projected/5b0d8c6f-30f2-4549-a63b-4c332145ea4e-kube-api-access-fbqvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-72g7q\" (UID: \"5b0d8c6f-30f2-4549-a63b-4c332145ea4e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.611733 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.617199 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.617565 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.117550791 +0000 UTC m=+139.489550230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.617644 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cthth" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.629278 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.629288 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05e846f8-1ebe-4926-8e94-784b94c246c6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hpkhl\" (UID: \"05e846f8-1ebe-4926-8e94-784b94c246c6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.664408 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.672782 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.688767 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.698673 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.716929 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.724160 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.724437 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.224425903 +0000 UTC m=+139.596425342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.738398 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-btdm7"] Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.742775 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.793804 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dm8kk" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.825436 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.826336 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-05 21:54:07.326312774 +0000 UTC m=+139.698312213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.888294 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.891196 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" event={"ID":"16522025-6bf5-4451-85e6-2df92d8164c2","Type":"ContainerStarted","Data":"f4fb7c0e1f6e5ec7169a486aae5358d508491d6b990817e76aad866a1da176bc"} Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.891974 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.895052 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" event={"ID":"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c","Type":"ContainerStarted","Data":"d3d830228d70c4d2e17c2aed9a4562294143afaa232c4525f8e7a58b49461b71"} Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.896738 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" event={"ID":"6db10d40-2f3c-44aa-a116-8f5ffa8577cd","Type":"ContainerStarted","Data":"9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b"} Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.897817 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.906145 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" event={"ID":"6019a2f9-5524-4776-851a-e30c348536d0","Type":"ContainerStarted","Data":"46b9242223a0e94c3e6ddb07e7ca5a31e7ffc068f46de60aff7d561151640d7e"} Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.908561 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" event={"ID":"765481c2-452a-44d8-bcc9-ba2b9e653a8b","Type":"ContainerStarted","Data":"94fa9dc4f121a7c5888aa2453009df7bb1fa835bffe73b6f022b80ca918256f2"} Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.910551 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" event={"ID":"579527a6-1737-40f2-8cfa-1798cc770142","Type":"ContainerStarted","Data":"b555c49c8b5a3685d34a2305d49323afa2c1481c090d2a1812efd24e95175e98"} Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.926914 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: 
\"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:06 crc kubenswrapper[5034]: E0105 21:54:06.935152 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.435134911 +0000 UTC m=+139.807134360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:06 crc kubenswrapper[5034]: I0105 21:54:06.978013 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.028688 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.030613 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.53058158 +0000 UTC m=+139.902581019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: W0105 21:54:07.112217 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd18831_c5ce_4983_9c86_4ac4a560e0a6.slice/crio-bb41e854452a5d54c5abc0188bea6219f0761a92c6363a2c4d80267c152b0fb2 WatchSource:0}: Error finding container bb41e854452a5d54c5abc0188bea6219f0761a92c6363a2c4d80267c152b0fb2: Status 404 returned error can't find the container with id bb41e854452a5d54c5abc0188bea6219f0761a92c6363a2c4d80267c152b0fb2 Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.121561 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4xwvv" podStartSLOduration=121.121545852 podStartE2EDuration="2m1.121545852s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:07.121186242 +0000 UTC m=+139.493185681" watchObservedRunningTime="2026-01-05 21:54:07.121545852 +0000 UTC m=+139.493545291" Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.130764 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.131277 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.631265217 +0000 UTC m=+140.003264656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.231832 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xz2sp"] Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.231925 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.242778 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.742746249 +0000 UTC m=+140.114745688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.345736 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.346131 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.846116342 +0000 UTC m=+140.218115781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.365094 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc"] Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.446407 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.446585 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.946561893 +0000 UTC m=+140.318561332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.447003 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.447300 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:07.947290413 +0000 UTC m=+140.319289862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.448579 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.548131 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.548434 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.048419803 +0000 UTC m=+140.420419232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.612985 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" podStartSLOduration=121.612968748 podStartE2EDuration="2m1.612968748s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:07.612956218 +0000 UTC m=+139.984955657" watchObservedRunningTime="2026-01-05 21:54:07.612968748 +0000 UTC m=+139.984968187" Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.647765 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ghp77" podStartSLOduration=121.647746131 podStartE2EDuration="2m1.647746131s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:07.646557448 +0000 UTC m=+140.018556887" watchObservedRunningTime="2026-01-05 21:54:07.647746131 +0000 UTC m=+140.019745570" Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.649149 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" 
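[editor's note — context for the repeating volume errors above and below: they all trace back to one condition. The kubevirt.io.hostpath-provisioner CSI driver has not yet registered with this kubelet, so both MountDevice for the image-registry pod's PVC and TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b fail with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", and nestedpendingoperations re-queues each operation with a 500ms durationBeforeRetry. The "SyncLoop UPDATE ... hostpath-provisioner/csi-hostpathplugin-47rdk" entry further down marks the driver pod itself coming up; once it registers over the kubelet's plugin-registration socket, these retries should start succeeding. As a side note, the podStartSLOduration values in between are plain timestamp arithmetic: 121.612968748s is exactly watchObservedRunningTime 21:54:07.612968748 minus podCreationTimestamp 21:52:06.

To watch the registration from outside the node, here is a minimal client-go sketch (not part of the original log; the node name "crc" is taken from the journal prefix, and the default kubeconfig path is an assumption). It lists the drivers recorded in the node's CSINode object, which should mirror the kubelet's in-memory registered-driver list that the errors above are checking:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig at the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The kubelet publishes node-level CSI registrations in the CSINode
        // object; "crc" is the node name seen in this journal.
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            // kubevirt.io.hostpath-provisioner should appear here once
            // csi-hostpathplugin has registered with the kubelet.
            fmt.Println(d.Name)
        }
    }

Until that driver name shows up, every MountVolume/UnmountVolume attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 keeps failing on the same error, as the entries that follow show. — end of editor's note]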
Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.649531 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.149519932 +0000 UTC m=+140.521519371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.750149 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.750439 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.250422985 +0000 UTC m=+140.622422424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.786100 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" podStartSLOduration=121.786071163 podStartE2EDuration="2m1.786071163s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:07.785857117 +0000 UTC m=+140.157856556" watchObservedRunningTime="2026-01-05 21:54:07.786071163 +0000 UTC m=+140.158070602" Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.853314 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.853831 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.353817888 +0000 UTC m=+140.725817327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.954718 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.954954 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.454926957 +0000 UTC m=+140.826926396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.955235 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:07 crc kubenswrapper[5034]: E0105 21:54:07.955728 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.45571874 +0000 UTC m=+140.827718179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.961465 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" podStartSLOduration=121.960836924 podStartE2EDuration="2m1.960836924s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:07.95928398 +0000 UTC m=+140.331283419" watchObservedRunningTime="2026-01-05 21:54:07.960836924 +0000 UTC m=+140.332836363" Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.972143 5034 patch_prober.go:28] interesting pod/downloads-7954f5f757-xz2sp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.972194 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xz2sp" podUID="f4747c26-8a6b-4d60-ae91-36f9d7b86f14" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 05 21:54:07 crc kubenswrapper[5034]: I0105 21:54:07.977928 5034 generic.go:334] "Generic (PLEG): container finished" podID="765481c2-452a-44d8-bcc9-ba2b9e653a8b" containerID="0a87d3d482b005fae86cc9194e81a970b51f01efbc04f0086fe82a2fafcefaaf" exitCode=0 Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009793 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xz2sp" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009822 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" event={"ID":"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c","Type":"ContainerStarted","Data":"b0390819bdb02132674986344fe4e2fcee4ff5b769958a8a05db1c1e685726fc"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009839 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" event={"ID":"7b90275a-e2cb-4f9f-99f9-fbdc4d9cbb9c","Type":"ContainerStarted","Data":"a227291fb6b1a47464d970a6780e91fa55320457c2145572059f1aa4e14da303"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009864 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" event={"ID":"6019a2f9-5524-4776-851a-e30c348536d0","Type":"ContainerStarted","Data":"d992a2c121692a3c8fec06ddc07e4fff9287221cbb34fbf6cabcdb42ce72eb7e"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009886 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lhk82" event={"ID":"563a62ee-1dc1-4dfe-a33c-eb671f426a37","Type":"ContainerStarted","Data":"868ac2ac59500a61e70d6faf68df799f4ce78ace76b5a4ae9c750a6f690dcc36"} Jan 05 
21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009895 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lhk82" event={"ID":"563a62ee-1dc1-4dfe-a33c-eb671f426a37","Type":"ContainerStarted","Data":"c00853b1fc380181ea9adebffabbc7e5e5dfe2191a352de096ef5b4328ca07b1"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009906 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xz2sp" event={"ID":"f4747c26-8a6b-4d60-ae91-36f9d7b86f14","Type":"ContainerStarted","Data":"2197f031dd57a1ef16f6ba7e12916fd225f257b20df04c993149f6f4a8e25185"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009914 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xz2sp" event={"ID":"f4747c26-8a6b-4d60-ae91-36f9d7b86f14","Type":"ContainerStarted","Data":"7ad9b5c96698fc81d0857a671693225d7ec6e22c6646b5b957a7626efd95bbfe"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009922 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rttcs" event={"ID":"5100f0a0-b59f-49ba-8c1f-647dfba1c314","Type":"ContainerStarted","Data":"2106cc98bb77c7b52cfc24eb1d83fa4e40f98485268182cae144fe28c167e48b"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009947 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rttcs" event={"ID":"5100f0a0-b59f-49ba-8c1f-647dfba1c314","Type":"ContainerStarted","Data":"5ab72e48d94c2397ae45d83a4466146b007e08cbf31e6091fa87c3adba141f7e"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009957 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" event={"ID":"765481c2-452a-44d8-bcc9-ba2b9e653a8b","Type":"ContainerDied","Data":"0a87d3d482b005fae86cc9194e81a970b51f01efbc04f0086fe82a2fafcefaaf"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009971 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" event={"ID":"7dd18831-c5ce-4983-9c86-4ac4a560e0a6","Type":"ContainerStarted","Data":"aa5e2b2f899a6b9a2ae004c792f41f2781da511574b94db4d94061afd5ea5194"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.009980 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" event={"ID":"7dd18831-c5ce-4983-9c86-4ac4a560e0a6","Type":"ContainerStarted","Data":"bb41e854452a5d54c5abc0188bea6219f0761a92c6363a2c4d80267c152b0fb2"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.059661 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.060733 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.560702668 +0000 UTC m=+140.932702107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.080665 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-94ljp" podStartSLOduration=121.080647892 podStartE2EDuration="2m1.080647892s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:08.051628582 +0000 UTC m=+140.423628021" watchObservedRunningTime="2026-01-05 21:54:08.080647892 +0000 UTC m=+140.452647321" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.163793 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.165147 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.665118761 +0000 UTC m=+141.037118200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.267119 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.267497 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.767478215 +0000 UTC m=+141.139477654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.320802 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7rtg" podStartSLOduration=122.320783642 podStartE2EDuration="2m2.320783642s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:08.318393905 +0000 UTC m=+140.690393354" watchObservedRunningTime="2026-01-05 21:54:08.320783642 +0000 UTC m=+140.692783101" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.374799 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.375168 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.87515373 +0000 UTC m=+141.247153169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.477444 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.977419162 +0000 UTC m=+141.349418601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.477747 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.478343 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.480105 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:08.980071306 +0000 UTC m=+141.352070745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.520970 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xz2sp" podStartSLOduration=122.520952882 podStartE2EDuration="2m2.520952882s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:08.519459469 +0000 UTC m=+140.891458908" watchObservedRunningTime="2026-01-05 21:54:08.520952882 +0000 UTC m=+140.892952321" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.572394 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rttcs" podStartSLOduration=5.572380706 podStartE2EDuration="5.572380706s" podCreationTimestamp="2026-01-05 21:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:08.571728217 +0000 UTC m=+140.943727646" watchObservedRunningTime="2026-01-05 21:54:08.572380706 +0000 UTC m=+140.944380145" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.580667 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.580986 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.080971829 +0000 UTC m=+141.452971268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.652700 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btdm7" podStartSLOduration=121.652683536 podStartE2EDuration="2m1.652683536s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:08.65138637 +0000 UTC m=+141.023385809" watchObservedRunningTime="2026-01-05 21:54:08.652683536 +0000 UTC m=+141.024682975" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.674751 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.680440 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69m5r" podStartSLOduration=121.680419601 podStartE2EDuration="2m1.680419601s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:08.678580239 +0000 UTC m=+141.050579678" watchObservedRunningTime="2026-01-05 21:54:08.680419601 +0000 UTC m=+141.052419040" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.682630 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.683307 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.183290792 +0000 UTC m=+141.555290231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.724685 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lhk82" podStartSLOduration=121.724669432 podStartE2EDuration="2m1.724669432s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:08.724649901 +0000 UTC m=+141.096649340" watchObservedRunningTime="2026-01-05 21:54:08.724669432 +0000 UTC m=+141.096668871" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.784801 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.785206 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.285190763 +0000 UTC m=+141.657190202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.876021 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" podStartSLOduration=122.876002391 podStartE2EDuration="2m2.876002391s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:08.845762486 +0000 UTC m=+141.217761935" watchObservedRunningTime="2026-01-05 21:54:08.876002391 +0000 UTC m=+141.248001830" Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.886186 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.886601 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.38657872 +0000 UTC m=+141.758578229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.987204 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.987325 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.487298768 +0000 UTC m=+141.859298207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.987453 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:08 crc kubenswrapper[5034]: E0105 21:54:08.987735 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.48772786 +0000 UTC m=+141.859727299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.992053 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" event={"ID":"285edf94-ce4e-4226-bf7f-aff67d967a6e","Type":"ContainerStarted","Data":"aa6af5982023a4fceb225a9c7064e3cc13326d0b70d0135b812b1967687df9aa"} Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.993173 5034 patch_prober.go:28] interesting pod/downloads-7954f5f757-xz2sp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 05 21:54:08 crc kubenswrapper[5034]: I0105 21:54:08.993203 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xz2sp" podUID="f4747c26-8a6b-4d60-ae91-36f9d7b86f14" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.027439 5034 patch_prober.go:28] interesting pod/router-default-5444994796-lhk82 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:54:09 crc kubenswrapper[5034]: [-]has-synced failed: reason withheld Jan 05 21:54:09 crc kubenswrapper[5034]: [+]process-running ok Jan 05 21:54:09 crc kubenswrapper[5034]: healthz check failed Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.027509 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhk82" podUID="563a62ee-1dc1-4dfe-a33c-eb671f426a37" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.090769 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.092833 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.592812932 +0000 UTC m=+141.964812381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.193071 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.193457 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.693433127 +0000 UTC m=+142.065432566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.294739 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.294912 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.794886585 +0000 UTC m=+142.166886024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.295223 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.295692 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.795678838 +0000 UTC m=+142.167678287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.400518 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.400899 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:09.900884233 +0000 UTC m=+142.272883672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.505205 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr"] Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.505862 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.506190 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.00617817 +0000 UTC m=+142.378177609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: W0105 21:54:09.531428 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726ec7f1_554d_46b9_83ff_bd08e7e8fb2a.slice/crio-793182c046075c6019bd6e03c66927fbb46dff97c30105c3faac664c9003f535 WatchSource:0}: Error finding container 793182c046075c6019bd6e03c66927fbb46dff97c30105c3faac664c9003f535: Status 404 returned error can't find the container with id 793182c046075c6019bd6e03c66927fbb46dff97c30105c3faac664c9003f535 Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.598479 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-47rdk"] Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.609577 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.609985 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.109967305 +0000 UTC m=+142.481966744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.694769 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42"] Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.703263 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hv5s8"] Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.705213 5034 patch_prober.go:28] interesting pod/router-default-5444994796-lhk82 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:54:09 crc kubenswrapper[5034]: [-]has-synced failed: reason withheld Jan 05 21:54:09 crc kubenswrapper[5034]: [+]process-running ok Jan 05 21:54:09 crc kubenswrapper[5034]: healthz check failed Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.705257 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhk82" podUID="563a62ee-1dc1-4dfe-a33c-eb671f426a37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.710891 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.711243 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.211229608 +0000 UTC m=+142.583229047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.819279 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.819599 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.319582352 +0000 UTC m=+142.691581791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.825363 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.825887 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.831033 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d"] Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.854160 5034 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vpvt5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]log ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]etcd ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/generic-apiserver-start-informers ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/max-in-flight-filter ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 05 21:54:09 crc kubenswrapper[5034]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 05 21:54:09 crc kubenswrapper[5034]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/project.openshift.io-projectcache ok Jan 05 21:54:09 crc kubenswrapper[5034]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/openshift.io-startinformers ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 05 21:54:09 crc kubenswrapper[5034]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 05 21:54:09 crc kubenswrapper[5034]: livez check failed Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.854223 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" podUID="6019a2f9-5524-4776-851a-e30c348536d0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.920481 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:09 crc kubenswrapper[5034]: E0105 21:54:09.920909 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.420896207 +0000 UTC m=+142.792895646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.947313 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f"] Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.947343 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f94p5"] Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.947355 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs"] Jan 05 21:54:09 crc kubenswrapper[5034]: I0105 21:54:09.947363 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh"] Jan 05 21:54:10 crc kubenswrapper[5034]: W0105 21:54:10.004623 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff8a90e_11c2_4cf9_b8db_e5c90a552709.slice/crio-7747f8a63fe81622ee3d02ed671aba7dea98a6b86d669aced4b06f1574887d79 WatchSource:0}: Error finding container 7747f8a63fe81622ee3d02ed671aba7dea98a6b86d669aced4b06f1574887d79: Status 404 returned error can't find the container with id 7747f8a63fe81622ee3d02ed671aba7dea98a6b86d669aced4b06f1574887d79 Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.015545 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 
21:54:10.028419 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.028811 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.528774497 +0000 UTC m=+142.900773986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.028859 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.029053 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" event={"ID":"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a","Type":"ContainerStarted","Data":"793182c046075c6019bd6e03c66927fbb46dff97c30105c3faac664c9003f535"} Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.029146 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.529135327 +0000 UTC m=+142.901134766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.035210 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" event={"ID":"055944f7-534d-4273-9960-3659d5751c2f","Type":"ContainerStarted","Data":"1319a2bb0d1cb8a22a47e92c525869d527deb7a730a844f2167b01668397fc63"} Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.064455 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" event={"ID":"145eff0c-dcb1-47ac-be8b-10d06f7dd204","Type":"ContainerStarted","Data":"a58a83bb33b1c39ac80acb50ea59009c0f040dd68e4b4be3ed92c64acc9b7e4e"} Jan 05 21:54:10 crc kubenswrapper[5034]: W0105 21:54:10.066336 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87c59995_27b2_4192_826d_f22e742dae38.slice/crio-aa370b3f4bb18005829bde7ec3c423d161f0951362ad665798b8b6aef7dc5fda WatchSource:0}: Error finding container aa370b3f4bb18005829bde7ec3c423d161f0951362ad665798b8b6aef7dc5fda: Status 404 returned error can't find the container with id aa370b3f4bb18005829bde7ec3c423d161f0951362ad665798b8b6aef7dc5fda Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.081001 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hv5s8" event={"ID":"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6","Type":"ContainerStarted","Data":"88ad3ee2f5f586f7ed14993ae78ccfcffc288a81b6e20eaac1b738e9b3de8f4d"} Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.115292 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dm8kk"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.120839 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" event={"ID":"285edf94-ce4e-4226-bf7f-aff67d967a6e","Type":"ContainerStarted","Data":"534e7402dce9185879db6e1072798fabcf2e3436bf945063f8d43fbd1fc478ff"} Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.120884 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" event={"ID":"285edf94-ce4e-4226-bf7f-aff67d967a6e","Type":"ContainerStarted","Data":"bc00972a06e618b53383ba9c4c4676e24965027cff4767c6e31fc42b9bbc8fa1"} Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.131921 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.132031 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.632009896 +0000 UTC m=+143.004009335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.132359 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.132691 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.632676795 +0000 UTC m=+143.004676234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.160364 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.161894 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-47rdk" event={"ID":"9f9df170-8b91-4370-8dce-46e91312904c","Type":"ContainerStarted","Data":"1121745be326e7c1c9aa18c660102d0f10136459b1b6f008ecb39d60796bd7e7"} Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.163157 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kjmjm"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.183362 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hz9rx"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.185230 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l6glc" podStartSLOduration=124.18521216 podStartE2EDuration="2m4.18521216s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:10.165011559 +0000 UTC m=+142.537011008" watchObservedRunningTime="2026-01-05 21:54:10.18521216 +0000 UTC m=+142.557211599" Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.185404 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl"] Jan 05 21:54:10 crc 
kubenswrapper[5034]: I0105 21:54:10.196672 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" event={"ID":"765481c2-452a-44d8-bcc9-ba2b9e653a8b","Type":"ContainerStarted","Data":"63ffb5727a4215eed2566e1d03e1d0b5039ac0afaff4779c229132ef6aec5877"} Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.213696 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.217757 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" podStartSLOduration=123.21774018 podStartE2EDuration="2m3.21774018s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:10.215476976 +0000 UTC m=+142.587476415" watchObservedRunningTime="2026-01-05 21:54:10.21774018 +0000 UTC m=+142.589739619" Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.224952 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" event={"ID":"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9","Type":"ContainerStarted","Data":"8c3ae9bf8f3b62b84991394e9d0692ad65bd7e9bb623e87b03d291e334e04425"} Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.233268 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.234121 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.734105463 +0000 UTC m=+143.106104902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: W0105 21:54:10.258684 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05e846f8_1ebe_4926_8e94_784b94c246c6.slice/crio-283cca695e7d61a1fd67996704c21b4070bdb0cfe78082721d73704c72071134 WatchSource:0}: Error finding container 283cca695e7d61a1fd67996704c21b4070bdb0cfe78082721d73704c72071134: Status 404 returned error can't find the container with id 283cca695e7d61a1fd67996704c21b4070bdb0cfe78082721d73704c72071134 Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.270728 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.283982 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.335456 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.336998 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.836984152 +0000 UTC m=+143.208983591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.352526 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.375519 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.381490 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.405170 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.410336 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4fdnm"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.429896 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8wssg"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.431510 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cthth"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.435955 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.436454 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:10.936432874 +0000 UTC m=+143.308432313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: W0105 21:54:10.472888 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2772457b_561e_4348_b816_e9d472c4678d.slice/crio-67b74b7464ae1c7c1af4897ab14f02d31c996b63c3129452a3e4294787052570 WatchSource:0}: Error finding container 67b74b7464ae1c7c1af4897ab14f02d31c996b63c3129452a3e4294787052570: Status 404 returned error can't find the container with id 67b74b7464ae1c7c1af4897ab14f02d31c996b63c3129452a3e4294787052570 Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.493601 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dd7xb"] Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.537630 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.538099 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.038061898 +0000 UTC m=+143.410061407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: W0105 21:54:10.608139 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf1871ae_05fe_4597_8bb9_e2525f739922.slice/crio-eb690f4a33341301ed0fcae36c7dde38b3547499ac71a34421d8e79dd5d97ff2 WatchSource:0}: Error finding container eb690f4a33341301ed0fcae36c7dde38b3547499ac71a34421d8e79dd5d97ff2: Status 404 returned error can't find the container with id eb690f4a33341301ed0fcae36c7dde38b3547499ac71a34421d8e79dd5d97ff2 Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.619879 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jcmjt" Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.644819 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.645299 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.145269749 +0000 UTC m=+143.517269188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.645609 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.645943 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.145935018 +0000 UTC m=+143.517934457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.681380 5034 patch_prober.go:28] interesting pod/router-default-5444994796-lhk82 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:54:10 crc kubenswrapper[5034]: [-]has-synced failed: reason withheld Jan 05 21:54:10 crc kubenswrapper[5034]: [+]process-running ok Jan 05 21:54:10 crc kubenswrapper[5034]: healthz check failed Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.681434 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhk82" podUID="563a62ee-1dc1-4dfe-a33c-eb671f426a37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.746919 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.747327 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.247306364 +0000 UTC m=+143.619305813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.849146 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.849741 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.349727611 +0000 UTC m=+143.721727050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.928747 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.933675 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.950023 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf" Jan 05 21:54:10 crc kubenswrapper[5034]: I0105 21:54:10.950738 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:10 crc kubenswrapper[5034]: E0105 21:54:10.951061 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.451036165 +0000 UTC m=+143.823035604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.051928 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.052320 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.552303729 +0000 UTC m=+143.924303248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.153170 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.154014 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.653992434 +0000 UTC m=+144.025991873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.255308 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.255804 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.755786703 +0000 UTC m=+144.127786142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.267949 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" event={"ID":"145eff0c-dcb1-47ac-be8b-10d06f7dd204","Type":"ContainerStarted","Data":"5ae48ee09d49008ee990b7403d77a25600c6013ccded3dab6e907fa9ebafc5a8"} Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.269544 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.283481 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" event={"ID":"1021ddb8-2b37-4da6-b560-17d91af60308","Type":"ContainerStarted","Data":"5b3c80b86d9a0759d0a9eebb2112fdef74cd5fe4466c518e47f4401e8cd79c43"} Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.298266 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" event={"ID":"05e846f8-1ebe-4926-8e94-784b94c246c6","Type":"ContainerStarted","Data":"283cca695e7d61a1fd67996704c21b4070bdb0cfe78082721d73704c72071134"} Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.307783 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8wssg" event={"ID":"d23b0bf5-8bd5-4891-b101-a278b984dbcf","Type":"ContainerStarted","Data":"aa5a7ab18628dd832b0cf718cd668b2b1b327a06ec226231b472493b13dfd9a0"} Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.318670 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f94p5" event={"ID":"87c59995-27b2-4192-826d-f22e742dae38","Type":"ContainerStarted","Data":"711c98b88375ad27a3a0d1db9d7585042023c27babcaa0f6781acdeae35500e0"} Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.318721 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f94p5" event={"ID":"87c59995-27b2-4192-826d-f22e742dae38","Type":"ContainerStarted","Data":"aa370b3f4bb18005829bde7ec3c423d161f0951362ad665798b8b6aef7dc5fda"} Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.319785 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-f94p5" Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.327364 5034 patch_prober.go:28] interesting pod/console-operator-58897d9998-f94p5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.327428 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-f94p5" podUID="87c59995-27b2-4192-826d-f22e742dae38" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.329985 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-47rdk" event={"ID":"9f9df170-8b91-4370-8dce-46e91312904c","Type":"ContainerStarted","Data":"07845d1d2a2b5f7a07614f301d0e3d21e24becb443ac6f4658df136e6d865fbc"} Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.368799 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.370002 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.869987432 +0000 UTC m=+144.241986861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.370867 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d" podStartSLOduration=124.370850286 podStartE2EDuration="2m4.370850286s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.303057489 +0000 UTC m=+143.675056928" watchObservedRunningTime="2026-01-05 21:54:11.370850286 +0000 UTC m=+143.742849725" Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.371600 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-f94p5" podStartSLOduration=125.371596097 podStartE2EDuration="2m5.371596097s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.371483784 +0000 UTC m=+143.743483223" watchObservedRunningTime="2026-01-05 21:54:11.371596097 +0000 UTC m=+143.743595536" Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.379731 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" event={"ID":"726ec7f1-554d-46b9-83ff-bd08e7e8fb2a","Type":"ContainerStarted","Data":"5f55c56a059489757ffb33301e8055366b7b2d88383c5ca00399b4660ca422fb"} Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.386426 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hv5s8" event={"ID":"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6","Type":"ContainerStarted","Data":"bb4be61cdc8c01e52a75977cd2cf06e8418dac612f5dfb8b6a980f3cd9424f5a"} Jan 05 21:54:11 crc 
kubenswrapper[5034]: I0105 21:54:11.460570 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" event={"ID":"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9","Type":"ContainerStarted","Data":"1446c10254993e3cd096a9562145237ed0810f3ac1ace457e7424de68d0de139"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.470331 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.471651 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:11.971634436 +0000 UTC m=+144.343633875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.505420 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dm8kk" event={"ID":"9b5e6e22-944f-4219-88bc-ab40a4fe37a9","Type":"ContainerStarted","Data":"91b6cdf1c897cb6039883c5e7f060beda800bcbe8e7a3516bde1cd66de65f791"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.505461 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dm8kk" event={"ID":"9b5e6e22-944f-4219-88bc-ab40a4fe37a9","Type":"ContainerStarted","Data":"fb54e0c6095ddf7b76751322398571f7a74451114c0c6b15d40b2588779812d9"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.515842 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" event={"ID":"1de23081-5abe-4824-b62b-17a083c43073","Type":"ContainerStarted","Data":"d903bb3657ef087ade782f06b5effd35b5076973cf8bd87637b58bd69973a37e"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.529015 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" event={"ID":"56dee690-ebb9-4ad9-a51e-ba1f77597d94","Type":"ContainerStarted","Data":"d4414e89f626b2ff7739c369cfb6f502c53bcb78941b469a183817ee8a13c646"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.534325 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k9zzr" podStartSLOduration=124.534308438 podStartE2EDuration="2m4.534308438s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.422847916 +0000 UTC m=+143.794847355" watchObservedRunningTime="2026-01-05 21:54:11.534308438 +0000 UTC m=+143.906307887"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.534840 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dm8kk" podStartSLOduration=8.534833793 podStartE2EDuration="8.534833793s" podCreationTimestamp="2026-01-05 21:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.534408121 +0000 UTC m=+143.906407560" watchObservedRunningTime="2026-01-05 21:54:11.534833793 +0000 UTC m=+143.906833232"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.544169 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" event={"ID":"2772457b-561e-4348-b816-e9d472c4678d","Type":"ContainerStarted","Data":"67b74b7464ae1c7c1af4897ab14f02d31c996b63c3129452a3e4294787052570"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.574606 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.575980 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.075964226 +0000 UTC m=+144.447963665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.576329 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" event={"ID":"9ff8a90e-11c2-4cf9-b8db-e5c90a552709","Type":"ContainerStarted","Data":"ff6480b7220d55af7104cfe998a2340e8091aa3fb61f258d6df2dabf1e4ea483"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.576377 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" event={"ID":"9ff8a90e-11c2-4cf9-b8db-e5c90a552709","Type":"ContainerStarted","Data":"7747f8a63fe81622ee3d02ed671aba7dea98a6b86d669aced4b06f1574887d79"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.590664 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" podStartSLOduration=125.590640071 podStartE2EDuration="2m5.590640071s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.588395427 +0000 UTC m=+143.960394866" watchObservedRunningTime="2026-01-05 21:54:11.590640071 +0000 UTC m=+143.962639510"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.595107 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cthth" event={"ID":"9afea6f6-4124-424d-a820-33b15ec35121","Type":"ContainerStarted","Data":"d1a68abad407217e2c09611efbdfe7b7f0e56aba79e9c4a4fe1bf4201423cb18"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.607366 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" event={"ID":"af1871ae-05fe-4597-8bb9-e2525f739922","Type":"ContainerStarted","Data":"eb690f4a33341301ed0fcae36c7dde38b3547499ac71a34421d8e79dd5d97ff2"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.611731 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" podStartSLOduration=124.611714827 podStartE2EDuration="2m4.611714827s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.6093447 +0000 UTC m=+143.981344139" watchObservedRunningTime="2026-01-05 21:54:11.611714827 +0000 UTC m=+143.983714266"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.628312 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" event={"ID":"c9121e17-1727-4f92-8aa3-e636d63fc1da","Type":"ContainerStarted","Data":"94137d14bb537492c70de47f6898aff2a40c5fce4fa76016dc0d16f479ef08d4"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.628363 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" event={"ID":"c9121e17-1727-4f92-8aa3-e636d63fc1da","Type":"ContainerStarted","Data":"a767c8ee62dc0b169ee88b63c7f84a72762b547f10739f12d559a4aeac729e74"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.630414 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.630474 5034 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z4stt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.630501 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" podUID="c9121e17-1727-4f92-8aa3-e636d63fc1da" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.659298 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" event={"ID":"648556f6-8682-4cf8-beaa-bdf944bb7f14","Type":"ContainerStarted","Data":"edbb5393ccb0d2ce21015a35a90f176197cfa101590b1109a4d13e1ab028b1d1"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.680375 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.682268 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.182253791 +0000 UTC m=+144.554253230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.683316 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt" podStartSLOduration=124.683300531 podStartE2EDuration="2m4.683300531s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.672103354 +0000 UTC m=+144.044102793" watchObservedRunningTime="2026-01-05 21:54:11.683300531 +0000 UTC m=+144.055299970"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.692311 5034 patch_prober.go:28] interesting pod/router-default-5444994796-lhk82 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 05 21:54:11 crc kubenswrapper[5034]: [-]has-synced failed: reason withheld
Jan 05 21:54:11 crc kubenswrapper[5034]: [+]process-running ok
Jan 05 21:54:11 crc kubenswrapper[5034]: healthz check failed
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.692374 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhk82" podUID="563a62ee-1dc1-4dfe-a33c-eb671f426a37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.693147 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" event={"ID":"055944f7-534d-4273-9960-3659d5751c2f","Type":"ContainerStarted","Data":"a754a6e8c3f0372b174d63ec2c95a1aa62d52ce6f59336b70829c11d6d636b3c"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.711828 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" event={"ID":"07c3c207-7bf4-414a-893f-642946b0fe8f","Type":"ContainerStarted","Data":"bb36353dffa46e2c83a52ac897b17f76e4f50a22841ec9b276edd171aa903cd9"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.717822 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-86f42" podStartSLOduration=124.717802157 podStartE2EDuration="2m4.717802157s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.716383426 +0000 UTC m=+144.088382865" watchObservedRunningTime="2026-01-05 21:54:11.717802157 +0000 UTC m=+144.089801596"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.739769 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4j99d"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.755330 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" podStartSLOduration=124.755309387 podStartE2EDuration="2m4.755309387s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.749353559 +0000 UTC m=+144.121352998" watchObservedRunningTime="2026-01-05 21:54:11.755309387 +0000 UTC m=+144.127308826"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.759353 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" event={"ID":"21e89bb2-84f3-407b-966b-b1774d96da98","Type":"ContainerStarted","Data":"0224fa77fe6c113b5804236d76f23f37ceb0ffee097db135f0146f20807a68ca"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.759400 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" event={"ID":"21e89bb2-84f3-407b-966b-b1774d96da98","Type":"ContainerStarted","Data":"b8da30fdd438a12b43f59d4819aa39c7c6b1a2ab656d1b86a44ad35a27325300"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.783342 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.783567 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.283544456 +0000 UTC m=+144.655543895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.783810 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.784961 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.284946235 +0000 UTC m=+144.656945874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.790687 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" event={"ID":"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441","Type":"ContainerStarted","Data":"afc3805e247607df248791ee69ecdf6b88fb3fc4269fd1a7fefce4e599534f3e"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.813364 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" podStartSLOduration=124.813346438 podStartE2EDuration="2m4.813346438s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.81199291 +0000 UTC m=+144.183992349" watchObservedRunningTime="2026-01-05 21:54:11.813346438 +0000 UTC m=+144.185345877"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.825019 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" event={"ID":"8c098c94-1752-4ea9-a292-e650d2b73ab6","Type":"ContainerStarted","Data":"cbc1d0389b3ac3cee670e22788bd8870dfd41d3cb22d50aa1b99c95d3c2a6d82"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.832838 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" event={"ID":"569d48b9-e7f0-4e8f-b0e5-9376475359c4","Type":"ContainerStarted","Data":"1e3f5a51c0c0b076bc10d40d02b8829fc84ed522a01cb364348b3ac76d3a2dbc"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.832885 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" event={"ID":"569d48b9-e7f0-4e8f-b0e5-9376475359c4","Type":"ContainerStarted","Data":"e5307873e3bba71a4d985bc46e235a9f5d859ae3129032f8a5bc9ba7b2745e42"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.833656 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.870629 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" podStartSLOduration=125.87061202699999 podStartE2EDuration="2m5.870612027s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.86857585 +0000 UTC m=+144.240575289" watchObservedRunningTime="2026-01-05 21:54:11.870612027 +0000 UTC m=+144.242611466"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.870871 5034 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-v6d74 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.870916 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" podUID="569d48b9-e7f0-4e8f-b0e5-9376475359c4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.878644 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.878683 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" event={"ID":"5b0d8c6f-30f2-4549-a63b-4c332145ea4e","Type":"ContainerStarted","Data":"7260799269d95678b2a0b6cb36e502e359e2329f5d19845a333f95f9db834f53"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.878705 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" event={"ID":"143b2828-1125-4598-8d3a-44fdc8023b73","Type":"ContainerStarted","Data":"5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.878724 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" event={"ID":"143b2828-1125-4598-8d3a-44fdc8023b73","Type":"ContainerStarted","Data":"b29a3cf1a6638e4cc664970fd3520fe07a3bbf92ceab62d659d984ebe54a1658"}
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.881470 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7fvf"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.884868 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.885170 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.385124388 +0000 UTC m=+144.757123827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.907475 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:11 crc kubenswrapper[5034]: E0105 21:54:11.908720 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.408704765 +0000 UTC m=+144.780704274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.910040 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74" podStartSLOduration=124.910026622 podStartE2EDuration="2m4.910026622s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.909363873 +0000 UTC m=+144.281363312" watchObservedRunningTime="2026-01-05 21:54:11.910026622 +0000 UTC m=+144.282026061"
Jan 05 21:54:11 crc kubenswrapper[5034]: I0105 21:54:11.942549 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" podStartSLOduration=124.942533841 podStartE2EDuration="2m4.942533841s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:11.942334326 +0000 UTC m=+144.314333775" watchObservedRunningTime="2026-01-05 21:54:11.942533841 +0000 UTC m=+144.314533280"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.009594 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.009895 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.509873315 +0000 UTC m=+144.881872754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.114087 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.114418 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.61440355 +0000 UTC m=+144.986402989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.215700 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.215979 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.715953802 +0000 UTC m=+145.087953241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.318850 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.319175 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.81916485 +0000 UTC m=+145.191164289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.420446 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.420818 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.920800154 +0000 UTC m=+145.292799593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.421150 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.421424 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:12.921402101 +0000 UTC m=+145.293401540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.424472 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.522138 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.522559 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.022543541 +0000 UTC m=+145.394542980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.623893 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.624331 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.124317519 +0000 UTC m=+145.496316958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.677556 5034 patch_prober.go:28] interesting pod/router-default-5444994796-lhk82 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 05 21:54:12 crc kubenswrapper[5034]: [-]has-synced failed: reason withheld
Jan 05 21:54:12 crc kubenswrapper[5034]: [+]process-running ok
Jan 05 21:54:12 crc kubenswrapper[5034]: healthz check failed
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.677602 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhk82" podUID="563a62ee-1dc1-4dfe-a33c-eb671f426a37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.724836 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.730357 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.230310166 +0000 UTC m=+145.602309615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.827097 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.827404 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.327392141 +0000 UTC m=+145.699391580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.882901 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" event={"ID":"05e846f8-1ebe-4926-8e94-784b94c246c6","Type":"ContainerStarted","Data":"8281172f2f0f3ea80ca7bd54e6e59820cc1985c860f3938f10c7186b9594d6ba"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.882948 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" event={"ID":"05e846f8-1ebe-4926-8e94-784b94c246c6","Type":"ContainerStarted","Data":"611011a0c3e56cdebfaa4737a53c6002b1188643edafd25f9cfabcd45d169d4e"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.886761 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" event={"ID":"8c098c94-1752-4ea9-a292-e650d2b73ab6","Type":"ContainerStarted","Data":"c81162b75842138fae33713eb49749c62010afe4e6d7ce18b1f500163c9cef36"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.893739 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" event={"ID":"af1871ae-05fe-4597-8bb9-e2525f739922","Type":"ContainerStarted","Data":"dac27b9d7bd7b7be02943a041c732a823987efea64dd55370f4933332d5f586b"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.894645 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.903135 5034 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dd7xb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.903202 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" podUID="af1871ae-05fe-4597-8bb9-e2525f739922" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.906773 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hv5s8" event={"ID":"c9db8506-ddd2-448c-b9ce-1d16b1cae0d6","Type":"ContainerStarted","Data":"a3bf0c597c88f91e902f9eb462932ff7e7fb33757a732cd016f0a9672055deaf"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.906907 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hv5s8"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.910233 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" event={"ID":"648556f6-8682-4cf8-beaa-bdf944bb7f14","Type":"ContainerStarted","Data":"9f20952e14617e04d1065db9f233f3a70dbf93b1c440b13acb960ea78596aae4"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.913166 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hpkhl" podStartSLOduration=126.913157266 podStartE2EDuration="2m6.913157266s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:12.911443877 +0000 UTC m=+145.283443316" watchObservedRunningTime="2026-01-05 21:54:12.913157266 +0000 UTC m=+145.285156705"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.917504 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" event={"ID":"5b0d8c6f-30f2-4549-a63b-4c332145ea4e","Type":"ContainerStarted","Data":"70a899a4c8fc949dfbfea7c8d40aa9d78df3041d79118dc6b3328e0d1aabf5e1"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.920191 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kjmjm" event={"ID":"3d4a6c01-2ba1-4765-97dd-ccaa4ef7a441","Type":"ContainerStarted","Data":"91a2fbbc0454fec75bde8e14f63be030c7af23e67823efce5d22e5eb4a4fca5e"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.921587 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" event={"ID":"1021ddb8-2b37-4da6-b560-17d91af60308","Type":"ContainerStarted","Data":"d6be9899ea1f08d08c0fab1c65d26a09ace00466eaa8ba0a8db280c4dfab544f"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.921613 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" event={"ID":"1021ddb8-2b37-4da6-b560-17d91af60308","Type":"ContainerStarted","Data":"817061ba2bc9f46e94f124cef16b3d7eeddb1b108669ed3ae12ec3ad32474c4a"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.921938 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.923231 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" event={"ID":"4c574bd6-8e79-4bc4-ae57-0b6bedfa19b9","Type":"ContainerStarted","Data":"7b09e2f4e54ef3770b6654ffd0f631aff29fbb2d7d287d3d99d56cbf5de83197"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.928485 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.928763 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.428738267 +0000 UTC m=+145.800737706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.929040 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:12 crc kubenswrapper[5034]: E0105 21:54:12.931193 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.431181976 +0000 UTC m=+145.803181415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.934442 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8wssg" event={"ID":"d23b0bf5-8bd5-4891-b101-a278b984dbcf","Type":"ContainerStarted","Data":"c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.948307 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8lxbm" event={"ID":"07c3c207-7bf4-414a-893f-642946b0fe8f","Type":"ContainerStarted","Data":"d2ae14530e50df5c7a14a6eb6741e38c6491355876f1aa6b4ff9be3613eb53b5"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.978169 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2w4bs" event={"ID":"9ff8a90e-11c2-4cf9-b8db-e5c90a552709","Type":"ContainerStarted","Data":"0555ec8661751c6bf5d4d38a5217bea0de0b351a2308758a144445851e1d0429"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.982411 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j75n2" podStartSLOduration=125.982389904 podStartE2EDuration="2m5.982389904s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:12.930662811 +0000 UTC m=+145.302662250" watchObservedRunningTime="2026-01-05 21:54:12.982389904 +0000 UTC m=+145.354389343"
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.985659 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cthth" event={"ID":"9afea6f6-4124-424d-a820-33b15ec35121","Type":"ContainerStarted","Data":"3386678719a1839b6bd14a924fc6401cf320a6a8ba5aea10ca7c2b246b6a3d8e"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.985705 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cthth" event={"ID":"9afea6f6-4124-424d-a820-33b15ec35121","Type":"ContainerStarted","Data":"7e1c21a0ef5eda689e868e6594298d8687feeab8a34e28375269b99298162562"}
Jan 05 21:54:12 crc kubenswrapper[5034]: I0105 21:54:12.999703 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" event={"ID":"56dee690-ebb9-4ad9-a51e-ba1f77597d94","Type":"ContainerStarted","Data":"869ed4b96f6339f91f241c2ff2799ee20d37499fa9bf0504fd152653bf79a6aa"}
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.001466 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" podStartSLOduration=126.001451803 podStartE2EDuration="2m6.001451803s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.000871916 +0000 UTC m=+145.372871375" watchObservedRunningTime="2026-01-05 21:54:13.001451803 +0000 UTC m=+145.373451252"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.001883 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hv5s8" podStartSLOduration=10.001875425 podStartE2EDuration="10.001875425s" podCreationTimestamp="2026-01-05 21:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:12.982569029 +0000 UTC m=+145.354568468" watchObservedRunningTime="2026-01-05 21:54:13.001875425 +0000 UTC m=+145.373874864"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.016904 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b477s" event={"ID":"2772457b-561e-4348-b816-e9d472c4678d","Type":"ContainerStarted","Data":"309c683f7e5f6f85e67346c6ee66f2b9ddca7dabc0892368c4ecd71203eff45b"}
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.019184 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" event={"ID":"1de23081-5abe-4824-b62b-17a083c43073","Type":"ContainerStarted","Data":"e1f5fb3ab84c33a76dd67e4a2441872039baf993e500e054c2dbb741b631acd5"}
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.019253 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" event={"ID":"1de23081-5abe-4824-b62b-17a083c43073","Type":"ContainerStarted","Data":"d277a50069088d7e592b9a697e0f9a40295541a749e8996d6c1dd7cada16cd3c"}
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.029434 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6d74"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.031607 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.032854 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.53283885 +0000 UTC m=+145.904838289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.081672 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6nv6f" podStartSLOduration=126.0816533 podStartE2EDuration="2m6.0816533s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.079309794 +0000 UTC m=+145.451309233" watchObservedRunningTime="2026-01-05 21:54:13.0816533 +0000 UTC m=+145.453652739"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.082651 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4fdnm" podStartSLOduration=126.082643618 podStartE2EDuration="2m6.082643618s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.049036428 +0000 UTC m=+145.421035857" watchObservedRunningTime="2026-01-05 21:54:13.082643618 +0000 UTC m=+145.454643057"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.124396 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cthth" podStartSLOduration=127.124376548 podStartE2EDuration="2m7.124376548s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.122168226 +0000 UTC m=+145.494167665" watchObservedRunningTime="2026-01-05 21:54:13.124376548 +0000 UTC m=+145.496375987"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.138302 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.138609 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.638596791 +0000 UTC m=+146.010596230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.175180 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-f94p5"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.179594 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77hq4" podStartSLOduration=126.179578389 podStartE2EDuration="2m6.179578389s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.175290068 +0000 UTC m=+145.547289507" watchObservedRunningTime="2026-01-05 21:54:13.179578389 +0000 UTC m=+145.551577838"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.183785 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z4stt"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.239522 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.239748 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.739695409 +0000 UTC m=+146.111694848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.239900 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.240387 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.740375398 +0000 UTC m=+146.112374927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.326785 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8wssg" podStartSLOduration=127.326768531 podStartE2EDuration="2m7.326768531s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.320119773 +0000 UTC m=+145.692119232" watchObservedRunningTime="2026-01-05 21:54:13.326768531 +0000 UTC m=+145.698767960"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.347598 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.348043 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.848024182 +0000 UTC m=+146.220023631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.348211 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.348536 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.848526737 +0000 UTC m=+146.220526176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.349286 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hz9rx" podStartSLOduration=126.349271798 podStartE2EDuration="2m6.349271798s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.346695105 +0000 UTC m=+145.718694544" watchObservedRunningTime="2026-01-05 21:54:13.349271798 +0000 UTC m=+145.721271227"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.395226 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72g7q" podStartSLOduration=127.395211557 podStartE2EDuration="2m7.395211557s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.394028153 +0000 UTC m=+145.766027592" watchObservedRunningTime="2026-01-05 21:54:13.395211557 +0000 UTC m=+145.767210996"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.397059 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" podStartSLOduration=126.397050299 podStartE2EDuration="2m6.397050299s" podCreationTimestamp="2026-01-05 21:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:13.370720364 +0000 UTC m=+145.742719803" watchObservedRunningTime="2026-01-05 21:54:13.397050299 +0000 UTC m=+145.769049738"
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.451588 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.451932 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:13.95191348 +0000 UTC m=+146.323912919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.553544 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll"
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.553923 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.053907004 +0000 UTC m=+146.425906443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.654651 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.654829 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.154802217 +0000 UTC m=+146.526801656 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.676360 5034 patch_prober.go:28] interesting pod/router-default-5444994796-lhk82 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:54:13 crc kubenswrapper[5034]: [-]has-synced failed: reason withheld Jan 05 21:54:13 crc kubenswrapper[5034]: [+]process-running ok Jan 05 21:54:13 crc kubenswrapper[5034]: healthz check failed Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.676430 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhk82" podUID="563a62ee-1dc1-4dfe-a33c-eb671f426a37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.756432 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.756839 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.256822892 +0000 UTC m=+146.628822331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.856913 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.857063 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.357038095 +0000 UTC m=+146.729037534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.857177 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.857508 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.357500298 +0000 UTC m=+146.729499737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.945533 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rwss9"] Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.946450 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.949018 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.957709 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.957884 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.457869356 +0000 UTC m=+146.829868795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.957914 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:13 crc kubenswrapper[5034]: E0105 21:54:13.958174 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.458160655 +0000 UTC m=+146.830160094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:13 crc kubenswrapper[5034]: I0105 21:54:13.968835 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rwss9"] Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.028099 5034 generic.go:334] "Generic (PLEG): container finished" podID="21e89bb2-84f3-407b-966b-b1774d96da98" containerID="0224fa77fe6c113b5804236d76f23f37ceb0ffee097db135f0146f20807a68ca" exitCode=0 Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.028164 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" event={"ID":"21e89bb2-84f3-407b-966b-b1774d96da98","Type":"ContainerDied","Data":"0224fa77fe6c113b5804236d76f23f37ceb0ffee097db135f0146f20807a68ca"} Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.031270 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-47rdk" event={"ID":"9f9df170-8b91-4370-8dce-46e91312904c","Type":"ContainerStarted","Data":"fd7e3b1e892cfad29ff6fd9b3fe424f6610b5d815c1a00e9d419777cbfc60ee0"} Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.031319 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-47rdk" event={"ID":"9f9df170-8b91-4370-8dce-46e91312904c","Type":"ContainerStarted","Data":"83098d08bebc68061a647bc5b10a0deaf2ea2dac5a72d74ac24f82b5a983acee"} Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.032010 5034 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dd7xb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.032056 
5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" podUID="af1871ae-05fe-4597-8bb9-e2525f739922" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.048359 5034 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.059453 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.059870 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-catalog-content\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.059961 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9dbc\" (UniqueName: \"kubernetes.io/projected/4b263441-0124-45fe-8cc0-14aa272246c3-kube-api-access-z9dbc\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.059992 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-utilities\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.060132 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.560114658 +0000 UTC m=+146.932114097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.148756 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vlt9r"] Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.149687 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.152730 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.161566 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9dbc\" (UniqueName: \"kubernetes.io/projected/4b263441-0124-45fe-8cc0-14aa272246c3-kube-api-access-z9dbc\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.161694 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-utilities\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.162970 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.163957 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-utilities\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.168270 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.668256285 +0000 UTC m=+147.040255724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.170169 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-catalog-content\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.171654 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-catalog-content\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.191062 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9dbc\" (UniqueName: \"kubernetes.io/projected/4b263441-0124-45fe-8cc0-14aa272246c3-kube-api-access-z9dbc\") pod \"community-operators-rwss9\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.201634 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlt9r"] Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.272299 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.272542 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.772511783 +0000 UTC m=+147.144511232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.272598 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.272648 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-catalog-content\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.272670 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-utilities\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.272869 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4hq\" (UniqueName: \"kubernetes.io/projected/58104f59-4ae4-4e18-aa6a-6762a589e921-kube-api-access-mn4hq\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.273091 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.773061169 +0000 UTC m=+147.145060608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.283208 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.357847 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84nh6"] Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.359024 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.374407 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.374599 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.874574559 +0000 UTC m=+147.246573998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.374748 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn4hq\" (UniqueName: \"kubernetes.io/projected/58104f59-4ae4-4e18-aa6a-6762a589e921-kube-api-access-mn4hq\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.374876 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.374949 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-catalog-content\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.374977 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-utilities\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.375577 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.875557857 +0000 UTC m=+147.247557366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.375730 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-utilities\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.375805 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-catalog-content\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.405726 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84nh6"] Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.407000 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn4hq\" (UniqueName: \"kubernetes.io/projected/58104f59-4ae4-4e18-aa6a-6762a589e921-kube-api-access-mn4hq\") pod \"certified-operators-vlt9r\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.476199 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.476374 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.976345107 +0000 UTC m=+147.348344546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.476441 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb42j\" (UniqueName: \"kubernetes.io/projected/66409272-43c4-46a0-8a57-c34201f689f2-kube-api-access-bb42j\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.476479 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.476501 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-catalog-content\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.476598 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-utilities\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.476775 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:14.976767519 +0000 UTC m=+147.348766958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.524980 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.562751 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v7qh7"] Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.578764 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.579047 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb42j\" (UniqueName: \"kubernetes.io/projected/66409272-43c4-46a0-8a57-c34201f689f2-kube-api-access-bb42j\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.579120 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-catalog-content\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.579174 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-utilities\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.579228 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:54:15.079199315 +0000 UTC m=+147.451198754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.579912 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-utilities\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.580007 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-catalog-content\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.580114 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.616685 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7qh7"] Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.624853 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb42j\" (UniqueName: \"kubernetes.io/projected/66409272-43c4-46a0-8a57-c34201f689f2-kube-api-access-bb42j\") pod \"community-operators-84nh6\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.682376 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-utilities\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.687109 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-catalog-content\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.687149 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjt5\" (UniqueName: \"kubernetes.io/projected/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-kube-api-access-rxjt5\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.687315 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:14 crc kubenswrapper[5034]: E0105 21:54:14.687764 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:54:15.187749885 +0000 UTC m=+147.559749324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nstll" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.683425 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.689359 5034 patch_prober.go:28] interesting pod/router-default-5444994796-lhk82 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:54:14 crc kubenswrapper[5034]: [-]has-synced failed: reason withheld Jan 05 21:54:14 crc kubenswrapper[5034]: [+]process-running ok Jan 05 21:54:14 crc kubenswrapper[5034]: healthz check failed Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.689445 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhk82" podUID="563a62ee-1dc1-4dfe-a33c-eb671f426a37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.703019 5034 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-05T21:54:14.048390446Z","Handler":null,"Name":""} Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.716816 5034 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.716870 5034 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.788529 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.788801 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.788830 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.788878 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.788905 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.788938 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-utilities\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.788960 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxjt5\" (UniqueName: \"kubernetes.io/projected/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-kube-api-access-rxjt5\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.788975 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-catalog-content\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.789388 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-catalog-content\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.790585 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-utilities\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.794694 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.796072 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.808733 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.810335 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.817565 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxjt5\" (UniqueName: \"kubernetes.io/projected/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-kube-api-access-rxjt5\") pod \"certified-operators-v7qh7\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.846405 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.848029 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.856255 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.861784 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vpvt5" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.865173 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.883014 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.894923 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.937194 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.937239 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:14 crc kubenswrapper[5034]: I0105 21:54:14.984167 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.004044 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rwss9"] Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.038562 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nstll\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.154555 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-47rdk" event={"ID":"9f9df170-8b91-4370-8dce-46e91312904c","Type":"ContainerStarted","Data":"079d670abd608530bba965b2d20567949d854e7e856c90aa6c5d33225a6c1037"} Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.164845 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.185574 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-47rdk" podStartSLOduration=12.185552741 podStartE2EDuration="12.185552741s" podCreationTimestamp="2026-01-05 21:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:15.185012055 +0000 UTC m=+147.557011514" watchObservedRunningTime="2026-01-05 21:54:15.185552741 +0000 UTC m=+147.557552180" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.212141 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.354265 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84nh6"] Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.407898 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlt9r"] Jan 05 21:54:15 crc kubenswrapper[5034]: W0105 21:54:15.418497 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66409272_43c4_46a0_8a57_c34201f689f2.slice/crio-0b15f0252fa1cfc2a5ba072512a62ff287509abd0b14daf50b34694984f19a3e WatchSource:0}: Error finding container 0b15f0252fa1cfc2a5ba072512a62ff287509abd0b14daf50b34694984f19a3e: Status 404 returned error can't find the container with id 0b15f0252fa1cfc2a5ba072512a62ff287509abd0b14daf50b34694984f19a3e Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.683363 5034 patch_prober.go:28] interesting pod/router-default-5444994796-lhk82 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:54:15 crc kubenswrapper[5034]: [-]has-synced failed: reason withheld Jan 05 21:54:15 crc kubenswrapper[5034]: [+]process-running ok Jan 05 21:54:15 crc kubenswrapper[5034]: healthz check failed Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.683421 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhk82" podUID="563a62ee-1dc1-4dfe-a33c-eb671f426a37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.781779 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.783230 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.792982 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.793301 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.796330 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.805320 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7qh7"] Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.848753 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.920780 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adb30076-9175-408d-8271-26b3b75a66be-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adb30076-9175-408d-8271-26b3b75a66be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.920918 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adb30076-9175-408d-8271-26b3b75a66be-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adb30076-9175-408d-8271-26b3b75a66be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.947711 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kv6pz"] Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.953227 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.957043 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 05 21:54:15 crc kubenswrapper[5034]: I0105 21:54:15.977246 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv6pz"] Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.006096 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.023126 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adb30076-9175-408d-8271-26b3b75a66be-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adb30076-9175-408d-8271-26b3b75a66be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.023192 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adb30076-9175-408d-8271-26b3b75a66be-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adb30076-9175-408d-8271-26b3b75a66be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.023273 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adb30076-9175-408d-8271-26b3b75a66be-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adb30076-9175-408d-8271-26b3b75a66be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.042021 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adb30076-9175-408d-8271-26b3b75a66be-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adb30076-9175-408d-8271-26b3b75a66be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:16 crc kubenswrapper[5034]: W0105 21:54:16.108420 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-48bf4d77fa0741b11d3beaa6be695d43822f718edd054283a5f61db6f98a4a94 WatchSource:0}: Error finding container 48bf4d77fa0741b11d3beaa6be695d43822f718edd054283a5f61db6f98a4a94: Status 404 returned error can't find the container with id 48bf4d77fa0741b11d3beaa6be695d43822f718edd054283a5f61db6f98a4a94 Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.120301 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.124428 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gff2t\" (UniqueName: \"kubernetes.io/projected/21e89bb2-84f3-407b-966b-b1774d96da98-kube-api-access-gff2t\") pod \"21e89bb2-84f3-407b-966b-b1774d96da98\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.124515 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21e89bb2-84f3-407b-966b-b1774d96da98-secret-volume\") pod \"21e89bb2-84f3-407b-966b-b1774d96da98\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.124571 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21e89bb2-84f3-407b-966b-b1774d96da98-config-volume\") pod \"21e89bb2-84f3-407b-966b-b1774d96da98\" (UID: \"21e89bb2-84f3-407b-966b-b1774d96da98\") " Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.124775 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6nm\" (UniqueName: \"kubernetes.io/projected/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-kube-api-access-br6nm\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.124807 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-catalog-content\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.124876 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-utilities\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.125566 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e89bb2-84f3-407b-966b-b1774d96da98-config-volume" (OuterVolumeSpecName: "config-volume") pod "21e89bb2-84f3-407b-966b-b1774d96da98" (UID: "21e89bb2-84f3-407b-966b-b1774d96da98"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.128222 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e89bb2-84f3-407b-966b-b1774d96da98-kube-api-access-gff2t" (OuterVolumeSpecName: "kube-api-access-gff2t") pod "21e89bb2-84f3-407b-966b-b1774d96da98" (UID: "21e89bb2-84f3-407b-966b-b1774d96da98"). InnerVolumeSpecName "kube-api-access-gff2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.129861 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e89bb2-84f3-407b-966b-b1774d96da98-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "21e89bb2-84f3-407b-966b-b1774d96da98" (UID: "21e89bb2-84f3-407b-966b-b1774d96da98"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.156431 5034 patch_prober.go:28] interesting pod/downloads-7954f5f757-xz2sp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.156477 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xz2sp" podUID="f4747c26-8a6b-4d60-ae91-36f9d7b86f14" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.156434 5034 patch_prober.go:28] interesting pod/downloads-7954f5f757-xz2sp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.156794 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xz2sp" podUID="f4747c26-8a6b-4d60-ae91-36f9d7b86f14" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.168106 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"48bf4d77fa0741b11d3beaa6be695d43822f718edd054283a5f61db6f98a4a94"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.170937 5034 generic.go:334] "Generic (PLEG): container finished" podID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerID="42f9ebed98569d7f233fb2ce5841061854f572fff3a2e9bdb7dbf1edd2aea22b" exitCode=0 Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.171334 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7qh7" event={"ID":"d2634a68-a5ff-4370-bdf4-e41065a0b8ef","Type":"ContainerDied","Data":"42f9ebed98569d7f233fb2ce5841061854f572fff3a2e9bdb7dbf1edd2aea22b"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.171478 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7qh7" event={"ID":"d2634a68-a5ff-4370-bdf4-e41065a0b8ef","Type":"ContainerStarted","Data":"54a58820952bb89a6ee12f5df1a035118e51d64d6e8a40bd3cf6409ec6b897ce"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.181500 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nstll"] Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.181561 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.193089 5034 
generic.go:334] "Generic (PLEG): container finished" podID="4b263441-0124-45fe-8cc0-14aa272246c3" containerID="9ae5a40e9bd330f99a6d6c54f3fe36ff41e290d8ad029945c17c8126d960a638" exitCode=0 Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.193159 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwss9" event={"ID":"4b263441-0124-45fe-8cc0-14aa272246c3","Type":"ContainerDied","Data":"9ae5a40e9bd330f99a6d6c54f3fe36ff41e290d8ad029945c17c8126d960a638"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.193191 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwss9" event={"ID":"4b263441-0124-45fe-8cc0-14aa272246c3","Type":"ContainerStarted","Data":"04958c24a98236b10b9e62fba4066fae73484ac8aae8ef6235f9fe734c3ef0bd"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.206214 5034 generic.go:334] "Generic (PLEG): container finished" podID="66409272-43c4-46a0-8a57-c34201f689f2" containerID="d58549b14928f3a4bc931794203eb7b35c751cf077e9d7201ba494deb0cbfa10" exitCode=0 Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.206512 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84nh6" event={"ID":"66409272-43c4-46a0-8a57-c34201f689f2","Type":"ContainerDied","Data":"d58549b14928f3a4bc931794203eb7b35c751cf077e9d7201ba494deb0cbfa10"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.206572 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84nh6" event={"ID":"66409272-43c4-46a0-8a57-c34201f689f2","Type":"ContainerStarted","Data":"0b15f0252fa1cfc2a5ba072512a62ff287509abd0b14daf50b34694984f19a3e"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.215361 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.215389 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh" event={"ID":"21e89bb2-84f3-407b-966b-b1774d96da98","Type":"ContainerDied","Data":"b8da30fdd438a12b43f59d4819aa39c7c6b1a2ab656d1b86a44ad35a27325300"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.215422 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8da30fdd438a12b43f59d4819aa39c7c6b1a2ab656d1b86a44ad35a27325300" Jan 05 21:54:16 crc kubenswrapper[5034]: W0105 21:54:16.215755 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5367619c_e54b_4d73_9c9e_cf73bbe8dbed.slice/crio-8dcf67bed10171a7b7fa2fbca9cbc28d4e68cf31a770c5fc2d81ee7f14530f63 WatchSource:0}: Error finding container 8dcf67bed10171a7b7fa2fbca9cbc28d4e68cf31a770c5fc2d81ee7f14530f63: Status 404 returned error can't find the container with id 8dcf67bed10171a7b7fa2fbca9cbc28d4e68cf31a770c5fc2d81ee7f14530f63 Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.216758 5034 generic.go:334] "Generic (PLEG): container finished" podID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerID="edf41e654213713ba3870d3500d2f14ff88ff3513f1f92e55836eb017ac48f00" exitCode=0 Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.218444 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlt9r" event={"ID":"58104f59-4ae4-4e18-aa6a-6762a589e921","Type":"ContainerDied","Data":"edf41e654213713ba3870d3500d2f14ff88ff3513f1f92e55836eb017ac48f00"} Jan 05 21:54:16 crc kubenswrapper[5034]: W0105 21:54:16.218458 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-6e37a45b322a6faa7b98321184569613e2de897cd492011d4685706655d1f21d WatchSource:0}: Error finding container 6e37a45b322a6faa7b98321184569613e2de897cd492011d4685706655d1f21d: Status 404 returned error can't find the container with id 6e37a45b322a6faa7b98321184569613e2de897cd492011d4685706655d1f21d Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.218475 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlt9r" event={"ID":"58104f59-4ae4-4e18-aa6a-6762a589e921","Type":"ContainerStarted","Data":"5827c40c95ab0e5ea9a75f42f757f46c01eb31b8ddb439c793cc05f761c7a7f7"} Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.229280 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br6nm\" (UniqueName: \"kubernetes.io/projected/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-kube-api-access-br6nm\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.229329 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-catalog-content\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.229361 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-utilities\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.229506 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21e89bb2-84f3-407b-966b-b1774d96da98-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.229519 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gff2t\" (UniqueName: \"kubernetes.io/projected/21e89bb2-84f3-407b-966b-b1774d96da98-kube-api-access-gff2t\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.229531 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21e89bb2-84f3-407b-966b-b1774d96da98-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.230262 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-utilities\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.230482 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-catalog-content\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: W0105 21:54:16.283227 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-c2477886c68df87a1092c4909132d2079950258966236a888043f986aceedef1 WatchSource:0}: Error finding container c2477886c68df87a1092c4909132d2079950258966236a888043f986aceedef1: Status 404 returned error can't find the container with id c2477886c68df87a1092c4909132d2079950258966236a888043f986aceedef1 Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.283869 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br6nm\" (UniqueName: \"kubernetes.io/projected/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-kube-api-access-br6nm\") pod \"redhat-marketplace-kv6pz\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.353110 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.381318 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-twqvz"] Jan 05 21:54:16 crc kubenswrapper[5034]: E0105 21:54:16.381650 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e89bb2-84f3-407b-966b-b1774d96da98" containerName="collect-profiles" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.381669 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e89bb2-84f3-407b-966b-b1774d96da98" containerName="collect-profiles" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.381791 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e89bb2-84f3-407b-966b-b1774d96da98" containerName="collect-profiles" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.382726 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.392138 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twqvz"] Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.505496 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.531714 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znkg4\" (UniqueName: \"kubernetes.io/projected/31c821c2-4c3c-456b-b280-f20c523587ea-kube-api-access-znkg4\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.532032 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-catalog-content\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.532055 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-utilities\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.566873 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.566906 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.577281 5034 patch_prober.go:28] interesting pod/console-f9d7485db-8wssg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.577331 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8wssg" podUID="d23b0bf5-8bd5-4891-b101-a278b984dbcf" 
containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.632898 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-catalog-content\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.632960 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-utilities\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.633062 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znkg4\" (UniqueName: \"kubernetes.io/projected/31c821c2-4c3c-456b-b280-f20c523587ea-kube-api-access-znkg4\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.634030 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-catalog-content\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.634312 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-utilities\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.656647 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znkg4\" (UniqueName: \"kubernetes.io/projected/31c821c2-4c3c-456b-b280-f20c523587ea-kube-api-access-znkg4\") pod \"redhat-marketplace-twqvz\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.673746 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.709522 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.737582 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:54:16 crc kubenswrapper[5034]: I0105 21:54:16.808450 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv6pz"] Jan 05 21:54:16 crc kubenswrapper[5034]: W0105 21:54:16.812537 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8c12ec_ccc2_4e78_8a00_0bc3b167dcd7.slice/crio-9281c9eec3198cfad1bdf8dc8d991920db916df66dc759916c612aac457d5574 WatchSource:0}: Error finding container 9281c9eec3198cfad1bdf8dc8d991920db916df66dc759916c612aac457d5574: Status 404 returned error can't find the container with id 9281c9eec3198cfad1bdf8dc8d991920db916df66dc759916c612aac457d5574 Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.142893 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l8mqz"] Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.144726 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.146559 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.156372 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8mqz"] Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.233513 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adb30076-9175-408d-8271-26b3b75a66be","Type":"ContainerStarted","Data":"1288c6b7deb8e922238002b01aa3f1234ad7504e1dc33ed0444b6fcd08fd49b9"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.233550 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adb30076-9175-408d-8271-26b3b75a66be","Type":"ContainerStarted","Data":"fba6f82b9e506d4fd0a4a4b434f106b17829402bc6a2885cdf2b25a3aa0ad7da"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.240144 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f81e53781cc69a4e4741baa547cc7016f43592250b8b78012f5d575e8fb99551"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.240205 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6e37a45b322a6faa7b98321184569613e2de897cd492011d4685706655d1f21d"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.248622 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" event={"ID":"5367619c-e54b-4d73-9c9e-cf73bbe8dbed","Type":"ContainerStarted","Data":"205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.248688 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" event={"ID":"5367619c-e54b-4d73-9c9e-cf73bbe8dbed","Type":"ContainerStarted","Data":"8dcf67bed10171a7b7fa2fbca9cbc28d4e68cf31a770c5fc2d81ee7f14530f63"} Jan 05 21:54:17 crc 
kubenswrapper[5034]: I0105 21:54:17.254588 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-utilities\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.254668 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-catalog-content\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.248772 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.254751 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqz9l\" (UniqueName: \"kubernetes.io/projected/0aa39adf-fc5d-44bf-a491-0ff564bd864c-kube-api-access-lqz9l\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.257794 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.257760894 podStartE2EDuration="2.257760894s" podCreationTimestamp="2026-01-05 21:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:17.249161601 +0000 UTC m=+149.621161050" watchObservedRunningTime="2026-01-05 21:54:17.257760894 +0000 UTC m=+149.629760353" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.263140 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5db94a21fcc40c4c266b35ebb9db4a0abaffa0a9072e361a5809ba53136726e5"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.263175 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c2477886c68df87a1092c4909132d2079950258966236a888043f986aceedef1"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.263642 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.268325 5034 generic.go:334] "Generic (PLEG): container finished" podID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerID="629a86b63be6144d32d28e0f2452ad6823b99e81b6794bda0b1215d3a7c2d91b" exitCode=0 Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.268395 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv6pz" event={"ID":"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7","Type":"ContainerDied","Data":"629a86b63be6144d32d28e0f2452ad6823b99e81b6794bda0b1215d3a7c2d91b"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.268419 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kv6pz" event={"ID":"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7","Type":"ContainerStarted","Data":"9281c9eec3198cfad1bdf8dc8d991920db916df66dc759916c612aac457d5574"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.275523 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1c68b179d17d42517936fa1840aeb64a0014060ee36f427f709a24dd94e58882"} Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.282563 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lhk82" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.291484 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" podStartSLOduration=131.291461287 podStartE2EDuration="2m11.291461287s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:17.268719914 +0000 UTC m=+149.640719373" watchObservedRunningTime="2026-01-05 21:54:17.291461287 +0000 UTC m=+149.663460726" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.301395 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twqvz"] Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.357418 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-utilities\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.357519 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-catalog-content\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.357611 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqz9l\" (UniqueName: \"kubernetes.io/projected/0aa39adf-fc5d-44bf-a491-0ff564bd864c-kube-api-access-lqz9l\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.359549 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-catalog-content\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.360936 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-utilities\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.443771 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lqz9l\" (UniqueName: \"kubernetes.io/projected/0aa39adf-fc5d-44bf-a491-0ff564bd864c-kube-api-access-lqz9l\") pod \"redhat-operators-l8mqz\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.515311 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.557360 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q64th"] Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.558714 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.571805 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q64th"] Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.662086 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-catalog-content\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.662485 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-utilities\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.662555 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcvp4\" (UniqueName: \"kubernetes.io/projected/cbe086f6-53ed-4d3a-b442-74671a78f935-kube-api-access-mcvp4\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.764300 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-utilities\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.764659 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcvp4\" (UniqueName: \"kubernetes.io/projected/cbe086f6-53ed-4d3a-b442-74671a78f935-kube-api-access-mcvp4\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.764697 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-catalog-content\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.765675 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-catalog-content\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.765935 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-utilities\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.788669 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcvp4\" (UniqueName: \"kubernetes.io/projected/cbe086f6-53ed-4d3a-b442-74671a78f935-kube-api-access-mcvp4\") pod \"redhat-operators-q64th\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:17 crc kubenswrapper[5034]: I0105 21:54:17.914165 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:54:18 crc kubenswrapper[5034]: I0105 21:54:18.168120 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8mqz"] Jan 05 21:54:18 crc kubenswrapper[5034]: W0105 21:54:18.175178 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aa39adf_fc5d_44bf_a491_0ff564bd864c.slice/crio-7f2a731841f27977d0b8e718fcebf8cb8b8f1e174faa70246157ffe4a074c2c1 WatchSource:0}: Error finding container 7f2a731841f27977d0b8e718fcebf8cb8b8f1e174faa70246157ffe4a074c2c1: Status 404 returned error can't find the container with id 7f2a731841f27977d0b8e718fcebf8cb8b8f1e174faa70246157ffe4a074c2c1 Jan 05 21:54:18 crc kubenswrapper[5034]: I0105 21:54:18.340666 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8mqz" event={"ID":"0aa39adf-fc5d-44bf-a491-0ff564bd864c","Type":"ContainerStarted","Data":"7f2a731841f27977d0b8e718fcebf8cb8b8f1e174faa70246157ffe4a074c2c1"} Jan 05 21:54:18 crc kubenswrapper[5034]: I0105 21:54:18.351740 5034 generic.go:334] "Generic (PLEG): container finished" podID="adb30076-9175-408d-8271-26b3b75a66be" containerID="1288c6b7deb8e922238002b01aa3f1234ad7504e1dc33ed0444b6fcd08fd49b9" exitCode=0 Jan 05 21:54:18 crc kubenswrapper[5034]: I0105 21:54:18.352599 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adb30076-9175-408d-8271-26b3b75a66be","Type":"ContainerDied","Data":"1288c6b7deb8e922238002b01aa3f1234ad7504e1dc33ed0444b6fcd08fd49b9"} Jan 05 21:54:18 crc kubenswrapper[5034]: I0105 21:54:18.357051 5034 generic.go:334] "Generic (PLEG): container finished" podID="31c821c2-4c3c-456b-b280-f20c523587ea" containerID="4f912479640ff978150230fc1b518a2ddbb63f585df7a9d0a1559f077ac156c0" exitCode=0 Jan 05 21:54:18 crc kubenswrapper[5034]: I0105 21:54:18.359166 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twqvz" event={"ID":"31c821c2-4c3c-456b-b280-f20c523587ea","Type":"ContainerDied","Data":"4f912479640ff978150230fc1b518a2ddbb63f585df7a9d0a1559f077ac156c0"} Jan 05 21:54:18 crc kubenswrapper[5034]: I0105 21:54:18.359219 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-twqvz" event={"ID":"31c821c2-4c3c-456b-b280-f20c523587ea","Type":"ContainerStarted","Data":"9918dfd95b578c97da7910efb098bc083c81825fce56ebacb8594ac5ce76189c"} Jan 05 21:54:18 crc kubenswrapper[5034]: I0105 21:54:18.393757 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q64th"] Jan 05 21:54:18 crc kubenswrapper[5034]: W0105 21:54:18.433754 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe086f6_53ed_4d3a_b442_74671a78f935.slice/crio-282290bf2af7fe45a2cd6be6ef0e7a0eb3e72557966e6fdf89e2de668435352b WatchSource:0}: Error finding container 282290bf2af7fe45a2cd6be6ef0e7a0eb3e72557966e6fdf89e2de668435352b: Status 404 returned error can't find the container with id 282290bf2af7fe45a2cd6be6ef0e7a0eb3e72557966e6fdf89e2de668435352b Jan 05 21:54:18 crc kubenswrapper[5034]: E0105 21:54:18.863478 5034 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe086f6_53ed_4d3a_b442_74671a78f935.slice/crio-c45c791efa67e4818333eda9301c501f4e42128dd2ac70ae9eb8190bc49a9a11.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe086f6_53ed_4d3a_b442_74671a78f935.slice/crio-conmon-c45c791efa67e4818333eda9301c501f4e42128dd2ac70ae9eb8190bc49a9a11.scope\": RecentStats: unable to find data in memory cache]" Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.368243 5034 generic.go:334] "Generic (PLEG): container finished" podID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerID="3ea8d1643fa69aeeaad85378bab2d062e5df98b09b5a62c7e4daa26137b9f984" exitCode=0 Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.368298 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8mqz" event={"ID":"0aa39adf-fc5d-44bf-a491-0ff564bd864c","Type":"ContainerDied","Data":"3ea8d1643fa69aeeaad85378bab2d062e5df98b09b5a62c7e4daa26137b9f984"} Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.371211 5034 generic.go:334] "Generic (PLEG): container finished" podID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerID="c45c791efa67e4818333eda9301c501f4e42128dd2ac70ae9eb8190bc49a9a11" exitCode=0 Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.372199 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q64th" event={"ID":"cbe086f6-53ed-4d3a-b442-74671a78f935","Type":"ContainerDied","Data":"c45c791efa67e4818333eda9301c501f4e42128dd2ac70ae9eb8190bc49a9a11"} Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.372290 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q64th" event={"ID":"cbe086f6-53ed-4d3a-b442-74671a78f935","Type":"ContainerStarted","Data":"282290bf2af7fe45a2cd6be6ef0e7a0eb3e72557966e6fdf89e2de668435352b"} Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.744549 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.902549 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adb30076-9175-408d-8271-26b3b75a66be-kubelet-dir\") pod \"adb30076-9175-408d-8271-26b3b75a66be\" (UID: \"adb30076-9175-408d-8271-26b3b75a66be\") " Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.902628 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adb30076-9175-408d-8271-26b3b75a66be-kube-api-access\") pod \"adb30076-9175-408d-8271-26b3b75a66be\" (UID: \"adb30076-9175-408d-8271-26b3b75a66be\") " Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.902698 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adb30076-9175-408d-8271-26b3b75a66be-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "adb30076-9175-408d-8271-26b3b75a66be" (UID: "adb30076-9175-408d-8271-26b3b75a66be"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.902895 5034 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adb30076-9175-408d-8271-26b3b75a66be-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:19 crc kubenswrapper[5034]: I0105 21:54:19.911189 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb30076-9175-408d-8271-26b3b75a66be-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "adb30076-9175-408d-8271-26b3b75a66be" (UID: "adb30076-9175-408d-8271-26b3b75a66be"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:20 crc kubenswrapper[5034]: I0105 21:54:20.004640 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adb30076-9175-408d-8271-26b3b75a66be-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:20 crc kubenswrapper[5034]: I0105 21:54:20.384467 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adb30076-9175-408d-8271-26b3b75a66be","Type":"ContainerDied","Data":"fba6f82b9e506d4fd0a4a4b434f106b17829402bc6a2885cdf2b25a3aa0ad7da"} Jan 05 21:54:20 crc kubenswrapper[5034]: I0105 21:54:20.384507 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fba6f82b9e506d4fd0a4a4b434f106b17829402bc6a2885cdf2b25a3aa0ad7da" Jan 05 21:54:20 crc kubenswrapper[5034]: I0105 21:54:20.384709 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:54:20 crc kubenswrapper[5034]: I0105 21:54:20.468633 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:54:20 crc kubenswrapper[5034]: I0105 21:54:20.468878 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.207237 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hv5s8" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.783912 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 21:54:21 crc kubenswrapper[5034]: E0105 21:54:21.784533 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb30076-9175-408d-8271-26b3b75a66be" containerName="pruner" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.784548 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb30076-9175-408d-8271-26b3b75a66be" containerName="pruner" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.784663 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb30076-9175-408d-8271-26b3b75a66be" containerName="pruner" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.785065 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.788173 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.862917 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.864264 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.967997 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:21 crc kubenswrapper[5034]: I0105 21:54:21.968058 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:22 crc kubenswrapper[5034]: I0105 21:54:22.069817 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:22 crc kubenswrapper[5034]: I0105 21:54:22.069876 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:22 crc kubenswrapper[5034]: I0105 21:54:22.070222 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:22 crc kubenswrapper[5034]: I0105 21:54:22.106344 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:22 crc kubenswrapper[5034]: I0105 21:54:22.187436 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:22 crc kubenswrapper[5034]: I0105 21:54:22.684698 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 21:54:23 crc kubenswrapper[5034]: I0105 21:54:23.461911 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a275aa4e-b275-4951-bc48-f6989fd0ecbc","Type":"ContainerStarted","Data":"cec4b0673bec98922a69850090ca15b10fee45dc7c50d48202c939fd295e79e2"} Jan 05 21:54:24 crc kubenswrapper[5034]: I0105 21:54:24.513527 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a275aa4e-b275-4951-bc48-f6989fd0ecbc","Type":"ContainerStarted","Data":"1656a9e0364664ec1f3712363015129be2d27f24e92499652f3c3000ed4203f0"} Jan 05 21:54:24 crc kubenswrapper[5034]: I0105 21:54:24.527951 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.5279303669999997 podStartE2EDuration="3.527930367s" podCreationTimestamp="2026-01-05 21:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:24.524643664 +0000 UTC m=+156.896643103" watchObservedRunningTime="2026-01-05 21:54:24.527930367 +0000 UTC m=+156.899929816" Jan 05 21:54:25 crc kubenswrapper[5034]: I0105 21:54:25.543882 5034 generic.go:334] "Generic (PLEG): container finished" podID="a275aa4e-b275-4951-bc48-f6989fd0ecbc" containerID="1656a9e0364664ec1f3712363015129be2d27f24e92499652f3c3000ed4203f0" exitCode=0 Jan 05 21:54:25 crc kubenswrapper[5034]: I0105 21:54:25.543923 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a275aa4e-b275-4951-bc48-f6989fd0ecbc","Type":"ContainerDied","Data":"1656a9e0364664ec1f3712363015129be2d27f24e92499652f3c3000ed4203f0"} Jan 05 21:54:26 crc kubenswrapper[5034]: I0105 21:54:26.157625 5034 patch_prober.go:28] interesting pod/downloads-7954f5f757-xz2sp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 05 21:54:26 crc kubenswrapper[5034]: I0105 21:54:26.157703 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xz2sp" podUID="f4747c26-8a6b-4d60-ae91-36f9d7b86f14" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 05 21:54:26 crc kubenswrapper[5034]: I0105 21:54:26.157990 5034 patch_prober.go:28] interesting pod/downloads-7954f5f757-xz2sp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 05 21:54:26 crc kubenswrapper[5034]: I0105 21:54:26.158089 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xz2sp" podUID="f4747c26-8a6b-4d60-ae91-36f9d7b86f14" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 05 21:54:26 crc kubenswrapper[5034]: I0105 21:54:26.572858 5034 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:26 crc kubenswrapper[5034]: I0105 21:54:26.577557 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8wssg" Jan 05 21:54:29 crc kubenswrapper[5034]: I0105 21:54:29.364513 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:54:29 crc kubenswrapper[5034]: I0105 21:54:29.381615 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7949c792-bd35-4fb3-9235-402a13c61026-metrics-certs\") pod \"network-metrics-daemon-99zr4\" (UID: \"7949c792-bd35-4fb3-9235-402a13c61026\") " pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:54:29 crc kubenswrapper[5034]: I0105 21:54:29.454703 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99zr4" Jan 05 21:54:35 crc kubenswrapper[5034]: I0105 21:54:35.218264 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:54:36 crc kubenswrapper[5034]: I0105 21:54:36.183674 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xz2sp" Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.634504 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.687449 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a275aa4e-b275-4951-bc48-f6989fd0ecbc","Type":"ContainerDied","Data":"cec4b0673bec98922a69850090ca15b10fee45dc7c50d48202c939fd295e79e2"} Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.687508 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cec4b0673bec98922a69850090ca15b10fee45dc7c50d48202c939fd295e79e2" Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.687563 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.825709 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kube-api-access\") pod \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\" (UID: \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\") " Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.825793 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kubelet-dir\") pod \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\" (UID: \"a275aa4e-b275-4951-bc48-f6989fd0ecbc\") " Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.825916 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a275aa4e-b275-4951-bc48-f6989fd0ecbc" (UID: "a275aa4e-b275-4951-bc48-f6989fd0ecbc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.826027 5034 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.830374 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a275aa4e-b275-4951-bc48-f6989fd0ecbc" (UID: "a275aa4e-b275-4951-bc48-f6989fd0ecbc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:40 crc kubenswrapper[5034]: I0105 21:54:40.927439 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a275aa4e-b275-4951-bc48-f6989fd0ecbc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:46 crc kubenswrapper[5034]: I0105 21:54:46.632942 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wfd9f" Jan 05 21:54:50 crc kubenswrapper[5034]: I0105 21:54:50.468689 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:54:50 crc kubenswrapper[5034]: I0105 21:54:50.469271 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:54:50 crc kubenswrapper[5034]: E0105 21:54:50.616301 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 05 21:54:50 crc kubenswrapper[5034]: E0105 21:54:50.616563 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxjt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-v7qh7_openshift-marketplace(d2634a68-a5ff-4370-bdf4-e41065a0b8ef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 21:54:50 crc kubenswrapper[5034]: 
E0105 21:54:50.617774 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-v7qh7" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" Jan 05 21:54:50 crc kubenswrapper[5034]: E0105 21:54:50.653943 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 05 21:54:50 crc kubenswrapper[5034]: E0105 21:54:50.654132 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn4hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vlt9r_openshift-marketplace(58104f59-4ae4-4e18-aa6a-6762a589e921): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 21:54:50 crc kubenswrapper[5034]: E0105 21:54:50.655639 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vlt9r" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" Jan 05 21:54:53 crc kubenswrapper[5034]: E0105 21:54:53.527449 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vlt9r" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" Jan 05 21:54:53 crc kubenswrapper[5034]: E0105 21:54:53.527611 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-v7qh7" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" Jan 05 21:54:54 crc kubenswrapper[5034]: I0105 21:54:54.874602 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.151672 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.151825 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb42j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-84nh6_openshift-marketplace(66409272-43c4-46a0-8a57-c34201f689f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.153040 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-84nh6" podUID="66409272-43c4-46a0-8a57-c34201f689f2" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.246962 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.249703 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9dbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rwss9_openshift-marketplace(4b263441-0124-45fe-8cc0-14aa272246c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.260268 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rwss9" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.260402 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.260550 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqz9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-l8mqz_openshift-marketplace(0aa39adf-fc5d-44bf-a491-0ff564bd864c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.261873 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l8mqz" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" Jan 05 21:54:55 crc kubenswrapper[5034]: I0105 21:54:55.556974 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-99zr4"] Jan 05 21:54:55 crc kubenswrapper[5034]: W0105 21:54:55.670015 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7949c792_bd35_4fb3_9235_402a13c61026.slice/crio-115dfdad089e90d5fc4110ce2bd252e1b33b5d4f6c1d6a6fb90822f3b7574ba0 WatchSource:0}: Error finding container 115dfdad089e90d5fc4110ce2bd252e1b33b5d4f6c1d6a6fb90822f3b7574ba0: Status 404 returned error can't find the container with id 115dfdad089e90d5fc4110ce2bd252e1b33b5d4f6c1d6a6fb90822f3b7574ba0 Jan 05 21:54:55 crc kubenswrapper[5034]: I0105 21:54:55.768735 5034 generic.go:334] "Generic (PLEG): container finished" podID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerID="2c8f5728b58fc06f77e3683f25da8bac68dbe494d2293be4458cf866bac41f04" exitCode=0 Jan 05 21:54:55 crc kubenswrapper[5034]: I0105 21:54:55.768791 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv6pz" event={"ID":"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7","Type":"ContainerDied","Data":"2c8f5728b58fc06f77e3683f25da8bac68dbe494d2293be4458cf866bac41f04"} Jan 05 21:54:55 crc kubenswrapper[5034]: I0105 21:54:55.773430 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99zr4" 
event={"ID":"7949c792-bd35-4fb3-9235-402a13c61026","Type":"ContainerStarted","Data":"115dfdad089e90d5fc4110ce2bd252e1b33b5d4f6c1d6a6fb90822f3b7574ba0"} Jan 05 21:54:55 crc kubenswrapper[5034]: I0105 21:54:55.775925 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q64th" event={"ID":"cbe086f6-53ed-4d3a-b442-74671a78f935","Type":"ContainerStarted","Data":"221782c4a04ae365ac750068f3c50b5d08e1e94242b3d3ed704868495bb91303"} Jan 05 21:54:55 crc kubenswrapper[5034]: I0105 21:54:55.778239 5034 generic.go:334] "Generic (PLEG): container finished" podID="31c821c2-4c3c-456b-b280-f20c523587ea" containerID="74dd0519f5a35ae38045a4a654921df08a4cdc2783f870dacdf479671fffc048" exitCode=0 Jan 05 21:54:55 crc kubenswrapper[5034]: I0105 21:54:55.779034 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twqvz" event={"ID":"31c821c2-4c3c-456b-b280-f20c523587ea","Type":"ContainerDied","Data":"74dd0519f5a35ae38045a4a654921df08a4cdc2783f870dacdf479671fffc048"} Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.780454 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-84nh6" podUID="66409272-43c4-46a0-8a57-c34201f689f2" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.780813 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rwss9" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" Jan 05 21:54:55 crc kubenswrapper[5034]: E0105 21:54:55.783465 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l8mqz" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.786488 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99zr4" event={"ID":"7949c792-bd35-4fb3-9235-402a13c61026","Type":"ContainerStarted","Data":"010975c7e1c0f9b4181fbbdc6e3c927c6210758d9b7411ddf014529533baf23c"} Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.786819 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99zr4" event={"ID":"7949c792-bd35-4fb3-9235-402a13c61026","Type":"ContainerStarted","Data":"ddcc4f66f912c46c5f8d18b9d5b36585da8c0aff908e3513be611e48800e39a0"} Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.788500 5034 generic.go:334] "Generic (PLEG): container finished" podID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerID="221782c4a04ae365ac750068f3c50b5d08e1e94242b3d3ed704868495bb91303" exitCode=0 Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.788553 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q64th" event={"ID":"cbe086f6-53ed-4d3a-b442-74671a78f935","Type":"ContainerDied","Data":"221782c4a04ae365ac750068f3c50b5d08e1e94242b3d3ed704868495bb91303"} Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.791522 5034 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-twqvz" event={"ID":"31c821c2-4c3c-456b-b280-f20c523587ea","Type":"ContainerStarted","Data":"8037a34ba71c1830f02920a66a9cc413cc5aefc087d58191613f984a180afd50"} Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.794334 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv6pz" event={"ID":"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7","Type":"ContainerStarted","Data":"cd0fb59f175b15ec63cb57a410f11e32615a76adc3637f60813478acd48cf6a3"} Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.806568 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-99zr4" podStartSLOduration=170.806549141 podStartE2EDuration="2m50.806549141s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:56.799063139 +0000 UTC m=+189.171062578" watchObservedRunningTime="2026-01-05 21:54:56.806549141 +0000 UTC m=+189.178548580" Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.834570 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kv6pz" podStartSLOduration=2.909366889 podStartE2EDuration="41.834555683s" podCreationTimestamp="2026-01-05 21:54:15 +0000 UTC" firstStartedPulling="2026-01-05 21:54:17.2699824 +0000 UTC m=+149.641981839" lastFinishedPulling="2026-01-05 21:54:56.195171184 +0000 UTC m=+188.567170633" observedRunningTime="2026-01-05 21:54:56.817797869 +0000 UTC m=+189.189797308" watchObservedRunningTime="2026-01-05 21:54:56.834555683 +0000 UTC m=+189.206555112" Jan 05 21:54:56 crc kubenswrapper[5034]: I0105 21:54:56.836352 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-twqvz" podStartSLOduration=3.027738843 podStartE2EDuration="40.836344974s" podCreationTimestamp="2026-01-05 21:54:16 +0000 UTC" firstStartedPulling="2026-01-05 21:54:18.360004402 +0000 UTC m=+150.732003831" lastFinishedPulling="2026-01-05 21:54:56.168610523 +0000 UTC m=+188.540609962" observedRunningTime="2026-01-05 21:54:56.833315138 +0000 UTC m=+189.205314587" watchObservedRunningTime="2026-01-05 21:54:56.836344974 +0000 UTC m=+189.208344403" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.379273 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 21:54:57 crc kubenswrapper[5034]: E0105 21:54:57.379515 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a275aa4e-b275-4951-bc48-f6989fd0ecbc" containerName="pruner" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.379529 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a275aa4e-b275-4951-bc48-f6989fd0ecbc" containerName="pruner" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.379644 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a275aa4e-b275-4951-bc48-f6989fd0ecbc" containerName="pruner" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.380248 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.381631 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.381975 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.393759 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.430538 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565faa8d-d026-4341-8965-f4de9a1fa26c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"565faa8d-d026-4341-8965-f4de9a1fa26c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.430599 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565faa8d-d026-4341-8965-f4de9a1fa26c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"565faa8d-d026-4341-8965-f4de9a1fa26c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.531879 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565faa8d-d026-4341-8965-f4de9a1fa26c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"565faa8d-d026-4341-8965-f4de9a1fa26c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.532002 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565faa8d-d026-4341-8965-f4de9a1fa26c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"565faa8d-d026-4341-8965-f4de9a1fa26c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.532054 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565faa8d-d026-4341-8965-f4de9a1fa26c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"565faa8d-d026-4341-8965-f4de9a1fa26c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.549547 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565faa8d-d026-4341-8965-f4de9a1fa26c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"565faa8d-d026-4341-8965-f4de9a1fa26c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:54:57 crc kubenswrapper[5034]: I0105 21:54:57.694824 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:54:58 crc kubenswrapper[5034]: I0105 21:54:58.184916 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 21:54:58 crc kubenswrapper[5034]: W0105 21:54:58.197773 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod565faa8d_d026_4341_8965_f4de9a1fa26c.slice/crio-bf7df9b9bb432c734eb69c0788bba42acfd77d00ba7f9dd609b1febe3dca646d WatchSource:0}: Error finding container bf7df9b9bb432c734eb69c0788bba42acfd77d00ba7f9dd609b1febe3dca646d: Status 404 returned error can't find the container with id bf7df9b9bb432c734eb69c0788bba42acfd77d00ba7f9dd609b1febe3dca646d Jan 05 21:54:58 crc kubenswrapper[5034]: I0105 21:54:58.808786 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"565faa8d-d026-4341-8965-f4de9a1fa26c","Type":"ContainerStarted","Data":"a122a08c410383c4dc84f4fc35b87aaa65469a4718ea585c4f56f2e6940cf490"} Jan 05 21:54:58 crc kubenswrapper[5034]: I0105 21:54:58.809033 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"565faa8d-d026-4341-8965-f4de9a1fa26c","Type":"ContainerStarted","Data":"bf7df9b9bb432c734eb69c0788bba42acfd77d00ba7f9dd609b1febe3dca646d"} Jan 05 21:54:58 crc kubenswrapper[5034]: I0105 21:54:58.811505 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q64th" event={"ID":"cbe086f6-53ed-4d3a-b442-74671a78f935","Type":"ContainerStarted","Data":"5b215a8d5bc7a86772186b8495b0eedfa22054f2c68840bf5a11269cc482b187"} Jan 05 21:54:58 crc kubenswrapper[5034]: I0105 21:54:58.829180 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.82909096 podStartE2EDuration="1.82909096s" podCreationTimestamp="2026-01-05 21:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:58.820909939 +0000 UTC m=+191.192909398" watchObservedRunningTime="2026-01-05 21:54:58.82909096 +0000 UTC m=+191.201090399" Jan 05 21:54:58 crc kubenswrapper[5034]: I0105 21:54:58.846740 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q64th" podStartSLOduration=3.476008934 podStartE2EDuration="41.846724159s" podCreationTimestamp="2026-01-05 21:54:17 +0000 UTC" firstStartedPulling="2026-01-05 21:54:19.393435053 +0000 UTC m=+151.765434492" lastFinishedPulling="2026-01-05 21:54:57.764150278 +0000 UTC m=+190.136149717" observedRunningTime="2026-01-05 21:54:58.842750036 +0000 UTC m=+191.214749475" watchObservedRunningTime="2026-01-05 21:54:58.846724159 +0000 UTC m=+191.218723598" Jan 05 21:54:59 crc kubenswrapper[5034]: I0105 21:54:59.818397 5034 generic.go:334] "Generic (PLEG): container finished" podID="565faa8d-d026-4341-8965-f4de9a1fa26c" containerID="a122a08c410383c4dc84f4fc35b87aaa65469a4718ea585c4f56f2e6940cf490" exitCode=0 Jan 05 21:54:59 crc kubenswrapper[5034]: I0105 21:54:59.818477 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"565faa8d-d026-4341-8965-f4de9a1fa26c","Type":"ContainerDied","Data":"a122a08c410383c4dc84f4fc35b87aaa65469a4718ea585c4f56f2e6940cf490"} Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 
21:55:01.068745 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.085480 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565faa8d-d026-4341-8965-f4de9a1fa26c-kube-api-access\") pod \"565faa8d-d026-4341-8965-f4de9a1fa26c\" (UID: \"565faa8d-d026-4341-8965-f4de9a1fa26c\") " Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.085529 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565faa8d-d026-4341-8965-f4de9a1fa26c-kubelet-dir\") pod \"565faa8d-d026-4341-8965-f4de9a1fa26c\" (UID: \"565faa8d-d026-4341-8965-f4de9a1fa26c\") " Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.085986 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/565faa8d-d026-4341-8965-f4de9a1fa26c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "565faa8d-d026-4341-8965-f4de9a1fa26c" (UID: "565faa8d-d026-4341-8965-f4de9a1fa26c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.093584 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565faa8d-d026-4341-8965-f4de9a1fa26c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "565faa8d-d026-4341-8965-f4de9a1fa26c" (UID: "565faa8d-d026-4341-8965-f4de9a1fa26c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.186750 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565faa8d-d026-4341-8965-f4de9a1fa26c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.186783 5034 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565faa8d-d026-4341-8965-f4de9a1fa26c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.829188 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"565faa8d-d026-4341-8965-f4de9a1fa26c","Type":"ContainerDied","Data":"bf7df9b9bb432c734eb69c0788bba42acfd77d00ba7f9dd609b1febe3dca646d"} Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.829237 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf7df9b9bb432c734eb69c0788bba42acfd77d00ba7f9dd609b1febe3dca646d" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.829824 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.979315 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 21:55:01 crc kubenswrapper[5034]: E0105 21:55:01.979636 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565faa8d-d026-4341-8965-f4de9a1fa26c" containerName="pruner" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.979650 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="565faa8d-d026-4341-8965-f4de9a1fa26c" containerName="pruner" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.979774 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="565faa8d-d026-4341-8965-f4de9a1fa26c" containerName="pruner" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.980271 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.983251 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.983524 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.990874 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.996050 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-var-lock\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.996140 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:01 crc kubenswrapper[5034]: I0105 21:55:01.996184 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kube-api-access\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.097273 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-var-lock\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.097579 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.097503 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-var-lock\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.097611 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kube-api-access\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.097753 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.117151 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kube-api-access\") pod \"installer-9-crc\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.298700 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.699446 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 21:55:02 crc kubenswrapper[5034]: I0105 21:55:02.835753 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05fed08d-c55d-4e3f-8940-a6af0cdd5f77","Type":"ContainerStarted","Data":"6d2d90fc7bcb864d25fa11eecbd373226db1c979df66eaf53c355993b447ddff"} Jan 05 21:55:03 crc kubenswrapper[5034]: I0105 21:55:03.843099 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05fed08d-c55d-4e3f-8940-a6af0cdd5f77","Type":"ContainerStarted","Data":"cb33fe18e08e9d9c915bc8bb49a715ba849a0c9699ec5cf72ede45b24bb08344"} Jan 05 21:55:03 crc kubenswrapper[5034]: I0105 21:55:03.860323 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.860303204 podStartE2EDuration="2.860303204s" podCreationTimestamp="2026-01-05 21:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:55:03.854008706 +0000 UTC m=+196.226008145" watchObservedRunningTime="2026-01-05 21:55:03.860303204 +0000 UTC m=+196.232302643" Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.354593 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.354652 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.419412 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 
21:55:06.738274 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.738339 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.779047 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.859146 5034 generic.go:334] "Generic (PLEG): container finished" podID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerID="6b6275b08a9c04f305b3b36071c2fd298c7c1bd96af02daba03b4b0bea7b576d" exitCode=0 Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.859229 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlt9r" event={"ID":"58104f59-4ae4-4e18-aa6a-6762a589e921","Type":"ContainerDied","Data":"6b6275b08a9c04f305b3b36071c2fd298c7c1bd96af02daba03b4b0bea7b576d"} Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.903427 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:55:06 crc kubenswrapper[5034]: I0105 21:55:06.905189 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:55:07 crc kubenswrapper[5034]: I0105 21:55:07.914851 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:55:07 crc kubenswrapper[5034]: I0105 21:55:07.915219 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:55:08 crc kubenswrapper[5034]: I0105 21:55:08.310648 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:55:08 crc kubenswrapper[5034]: I0105 21:55:08.906476 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:55:10 crc kubenswrapper[5034]: I0105 21:55:10.881341 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlt9r" event={"ID":"58104f59-4ae4-4e18-aa6a-6762a589e921","Type":"ContainerStarted","Data":"a9dceb956f364b938713ba50e524c6b2f74b5a715a9b4acadb60e76df5249167"} Jan 05 21:55:10 crc kubenswrapper[5034]: I0105 21:55:10.899025 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vlt9r" podStartSLOduration=3.249142566 podStartE2EDuration="56.89900895s" podCreationTimestamp="2026-01-05 21:54:14 +0000 UTC" firstStartedPulling="2026-01-05 21:54:16.226676859 +0000 UTC m=+148.598676298" lastFinishedPulling="2026-01-05 21:55:09.876543243 +0000 UTC m=+202.248542682" observedRunningTime="2026-01-05 21:55:10.897171348 +0000 UTC m=+203.269170787" watchObservedRunningTime="2026-01-05 21:55:10.89900895 +0000 UTC m=+203.271008389" Jan 05 21:55:11 crc kubenswrapper[5034]: I0105 21:55:11.063420 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twqvz"] Jan 05 21:55:11 crc kubenswrapper[5034]: I0105 21:55:11.063634 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-twqvz" 
podUID="31c821c2-4c3c-456b-b280-f20c523587ea" containerName="registry-server" containerID="cri-o://8037a34ba71c1830f02920a66a9cc413cc5aefc087d58191613f984a180afd50" gracePeriod=2 Jan 05 21:55:11 crc kubenswrapper[5034]: I0105 21:55:11.264102 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q64th"] Jan 05 21:55:11 crc kubenswrapper[5034]: I0105 21:55:11.264308 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q64th" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerName="registry-server" containerID="cri-o://5b215a8d5bc7a86772186b8495b0eedfa22054f2c68840bf5a11269cc482b187" gracePeriod=2 Jan 05 21:55:12 crc kubenswrapper[5034]: I0105 21:55:12.890752 5034 generic.go:334] "Generic (PLEG): container finished" podID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerID="5b215a8d5bc7a86772186b8495b0eedfa22054f2c68840bf5a11269cc482b187" exitCode=0 Jan 05 21:55:12 crc kubenswrapper[5034]: I0105 21:55:12.891126 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q64th" event={"ID":"cbe086f6-53ed-4d3a-b442-74671a78f935","Type":"ContainerDied","Data":"5b215a8d5bc7a86772186b8495b0eedfa22054f2c68840bf5a11269cc482b187"} Jan 05 21:55:12 crc kubenswrapper[5034]: I0105 21:55:12.892808 5034 generic.go:334] "Generic (PLEG): container finished" podID="31c821c2-4c3c-456b-b280-f20c523587ea" containerID="8037a34ba71c1830f02920a66a9cc413cc5aefc087d58191613f984a180afd50" exitCode=0 Jan 05 21:55:12 crc kubenswrapper[5034]: I0105 21:55:12.892832 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twqvz" event={"ID":"31c821c2-4c3c-456b-b280-f20c523587ea","Type":"ContainerDied","Data":"8037a34ba71c1830f02920a66a9cc413cc5aefc087d58191613f984a180afd50"} Jan 05 21:55:14 crc kubenswrapper[5034]: I0105 21:55:14.525469 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:55:14 crc kubenswrapper[5034]: I0105 21:55:14.525520 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:55:14 crc kubenswrapper[5034]: I0105 21:55:14.564197 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:55:14 crc kubenswrapper[5034]: I0105 21:55:14.947504 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:55:14 crc kubenswrapper[5034]: I0105 21:55:14.989041 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.095097 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-catalog-content\") pod \"31c821c2-4c3c-456b-b280-f20c523587ea\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.095182 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-catalog-content\") pod \"cbe086f6-53ed-4d3a-b442-74671a78f935\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.095211 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-utilities\") pod \"cbe086f6-53ed-4d3a-b442-74671a78f935\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.095259 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znkg4\" (UniqueName: \"kubernetes.io/projected/31c821c2-4c3c-456b-b280-f20c523587ea-kube-api-access-znkg4\") pod \"31c821c2-4c3c-456b-b280-f20c523587ea\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.095290 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-utilities\") pod \"31c821c2-4c3c-456b-b280-f20c523587ea\" (UID: \"31c821c2-4c3c-456b-b280-f20c523587ea\") " Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.095306 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcvp4\" (UniqueName: \"kubernetes.io/projected/cbe086f6-53ed-4d3a-b442-74671a78f935-kube-api-access-mcvp4\") pod \"cbe086f6-53ed-4d3a-b442-74671a78f935\" (UID: \"cbe086f6-53ed-4d3a-b442-74671a78f935\") " Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.095984 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-utilities" (OuterVolumeSpecName: "utilities") pod "cbe086f6-53ed-4d3a-b442-74671a78f935" (UID: "cbe086f6-53ed-4d3a-b442-74671a78f935"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.097219 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-utilities" (OuterVolumeSpecName: "utilities") pod "31c821c2-4c3c-456b-b280-f20c523587ea" (UID: "31c821c2-4c3c-456b-b280-f20c523587ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.103219 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe086f6-53ed-4d3a-b442-74671a78f935-kube-api-access-mcvp4" (OuterVolumeSpecName: "kube-api-access-mcvp4") pod "cbe086f6-53ed-4d3a-b442-74671a78f935" (UID: "cbe086f6-53ed-4d3a-b442-74671a78f935"). InnerVolumeSpecName "kube-api-access-mcvp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.103274 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c821c2-4c3c-456b-b280-f20c523587ea-kube-api-access-znkg4" (OuterVolumeSpecName: "kube-api-access-znkg4") pod "31c821c2-4c3c-456b-b280-f20c523587ea" (UID: "31c821c2-4c3c-456b-b280-f20c523587ea"). InnerVolumeSpecName "kube-api-access-znkg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.129597 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31c821c2-4c3c-456b-b280-f20c523587ea" (UID: "31c821c2-4c3c-456b-b280-f20c523587ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.196567 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znkg4\" (UniqueName: \"kubernetes.io/projected/31c821c2-4c3c-456b-b280-f20c523587ea-kube-api-access-znkg4\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.196605 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.196614 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcvp4\" (UniqueName: \"kubernetes.io/projected/cbe086f6-53ed-4d3a-b442-74671a78f935-kube-api-access-mcvp4\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.196623 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c821c2-4c3c-456b-b280-f20c523587ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.196632 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.221836 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbe086f6-53ed-4d3a-b442-74671a78f935" (UID: "cbe086f6-53ed-4d3a-b442-74671a78f935"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.298069 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe086f6-53ed-4d3a-b442-74671a78f935-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.907548 5034 generic.go:334] "Generic (PLEG): container finished" podID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerID="9a86b2b39594dfa2b8d733510ce159b73bf639e5d9e9d97ed9c369d4b2e4d8ad" exitCode=0 Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.907639 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7qh7" event={"ID":"d2634a68-a5ff-4370-bdf4-e41065a0b8ef","Type":"ContainerDied","Data":"9a86b2b39594dfa2b8d733510ce159b73bf639e5d9e9d97ed9c369d4b2e4d8ad"} Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.909222 5034 generic.go:334] "Generic (PLEG): container finished" podID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerID="f855c3d2a92d27ec9978a0c17631728cd9f9ddc9b1ecb369577fbe20ad78f7fa" exitCode=0 Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.909254 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8mqz" event={"ID":"0aa39adf-fc5d-44bf-a491-0ff564bd864c","Type":"ContainerDied","Data":"f855c3d2a92d27ec9978a0c17631728cd9f9ddc9b1ecb369577fbe20ad78f7fa"} Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.913012 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q64th" event={"ID":"cbe086f6-53ed-4d3a-b442-74671a78f935","Type":"ContainerDied","Data":"282290bf2af7fe45a2cd6be6ef0e7a0eb3e72557966e6fdf89e2de668435352b"} Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.913050 5034 scope.go:117] "RemoveContainer" containerID="5b215a8d5bc7a86772186b8495b0eedfa22054f2c68840bf5a11269cc482b187" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.913166 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q64th" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.915896 5034 generic.go:334] "Generic (PLEG): container finished" podID="4b263441-0124-45fe-8cc0-14aa272246c3" containerID="abd82277f8180cd4676f5f58d583adba5c584f9436331406086634138163f579" exitCode=0 Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.915939 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwss9" event={"ID":"4b263441-0124-45fe-8cc0-14aa272246c3","Type":"ContainerDied","Data":"abd82277f8180cd4676f5f58d583adba5c584f9436331406086634138163f579"} Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.920930 5034 generic.go:334] "Generic (PLEG): container finished" podID="66409272-43c4-46a0-8a57-c34201f689f2" containerID="6aaefb72b0848d5fa4232e5608958b26da11bd2910a18932c829db5abdea7005" exitCode=0 Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.920996 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84nh6" event={"ID":"66409272-43c4-46a0-8a57-c34201f689f2","Type":"ContainerDied","Data":"6aaefb72b0848d5fa4232e5608958b26da11bd2910a18932c829db5abdea7005"} Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.930591 5034 scope.go:117] "RemoveContainer" containerID="221782c4a04ae365ac750068f3c50b5d08e1e94242b3d3ed704868495bb91303" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.930878 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twqvz" event={"ID":"31c821c2-4c3c-456b-b280-f20c523587ea","Type":"ContainerDied","Data":"9918dfd95b578c97da7910efb098bc083c81825fce56ebacb8594ac5ce76189c"} Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.931125 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twqvz" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.962429 5034 scope.go:117] "RemoveContainer" containerID="c45c791efa67e4818333eda9301c501f4e42128dd2ac70ae9eb8190bc49a9a11" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.984551 5034 scope.go:117] "RemoveContainer" containerID="8037a34ba71c1830f02920a66a9cc413cc5aefc087d58191613f984a180afd50" Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.988367 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q64th"] Jan 05 21:55:15 crc kubenswrapper[5034]: I0105 21:55:15.993122 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q64th"] Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.010998 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twqvz"] Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.013803 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-twqvz"] Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.014457 5034 scope.go:117] "RemoveContainer" containerID="74dd0519f5a35ae38045a4a654921df08a4cdc2783f870dacdf479671fffc048" Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.030435 5034 scope.go:117] "RemoveContainer" containerID="4f912479640ff978150230fc1b518a2ddbb63f585df7a9d0a1559f077ac156c0" Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.941760 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7qh7" event={"ID":"d2634a68-a5ff-4370-bdf4-e41065a0b8ef","Type":"ContainerStarted","Data":"73a3447a4d315d669d4d8b84e8d2c042aa34d3af27a37dc815c418ada453b988"} Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.943340 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8mqz" event={"ID":"0aa39adf-fc5d-44bf-a491-0ff564bd864c","Type":"ContainerStarted","Data":"a79f73a48e3ddfda18f2cfd00c0aa1e555cf6989381fec11b6e7d6e4d1cd560a"} Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.944756 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwss9" event={"ID":"4b263441-0124-45fe-8cc0-14aa272246c3","Type":"ContainerStarted","Data":"04004fa19986610ab776dc5ebc4c5a9bacad7fc058652945fab91308898485bb"} Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.947227 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84nh6" event={"ID":"66409272-43c4-46a0-8a57-c34201f689f2","Type":"ContainerStarted","Data":"f1f10de5d391d07c449978b6d0d0c53da21cb11066b0205f8b0ebc66c63dff93"} Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.983638 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l8mqz" podStartSLOduration=2.999322785 podStartE2EDuration="59.983622047s" podCreationTimestamp="2026-01-05 21:54:17 +0000 UTC" firstStartedPulling="2026-01-05 21:54:19.369442214 +0000 UTC m=+151.741441653" lastFinishedPulling="2026-01-05 21:55:16.353741476 +0000 UTC m=+208.725740915" observedRunningTime="2026-01-05 21:55:16.982457255 +0000 UTC m=+209.354456694" watchObservedRunningTime="2026-01-05 21:55:16.983622047 +0000 UTC m=+209.355621476" Jan 05 21:55:16 crc kubenswrapper[5034]: I0105 21:55:16.984278 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-v7qh7" podStartSLOduration=2.797027922 podStartE2EDuration="1m2.984272395s" podCreationTimestamp="2026-01-05 21:54:14 +0000 UTC" firstStartedPulling="2026-01-05 21:54:16.181221644 +0000 UTC m=+148.553221083" lastFinishedPulling="2026-01-05 21:55:16.368466117 +0000 UTC m=+208.740465556" observedRunningTime="2026-01-05 21:55:16.964118563 +0000 UTC m=+209.336118002" watchObservedRunningTime="2026-01-05 21:55:16.984272395 +0000 UTC m=+209.356271834" Jan 05 21:55:17 crc kubenswrapper[5034]: I0105 21:55:17.019150 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84nh6" podStartSLOduration=2.82668932 podStartE2EDuration="1m3.019127559s" podCreationTimestamp="2026-01-05 21:54:14 +0000 UTC" firstStartedPulling="2026-01-05 21:54:16.208884166 +0000 UTC m=+148.580883605" lastFinishedPulling="2026-01-05 21:55:16.401322405 +0000 UTC m=+208.773321844" observedRunningTime="2026-01-05 21:55:17.002835874 +0000 UTC m=+209.374835333" watchObservedRunningTime="2026-01-05 21:55:17.019127559 +0000 UTC m=+209.391126998" Jan 05 21:55:17 crc kubenswrapper[5034]: I0105 21:55:17.516675 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:55:17 crc kubenswrapper[5034]: I0105 21:55:17.516752 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:55:17 crc kubenswrapper[5034]: I0105 21:55:17.844923 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c821c2-4c3c-456b-b280-f20c523587ea" path="/var/lib/kubelet/pods/31c821c2-4c3c-456b-b280-f20c523587ea/volumes" Jan 05 21:55:17 crc kubenswrapper[5034]: I0105 21:55:17.845607 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" path="/var/lib/kubelet/pods/cbe086f6-53ed-4d3a-b442-74671a78f935/volumes" Jan 05 21:55:18 crc kubenswrapper[5034]: I0105 21:55:18.555452 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l8mqz" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:55:18 crc kubenswrapper[5034]: timeout: failed to connect service ":50051" within 1s Jan 05 21:55:18 crc kubenswrapper[5034]: > Jan 05 21:55:20 crc kubenswrapper[5034]: I0105 21:55:20.468514 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:55:20 crc kubenswrapper[5034]: I0105 21:55:20.468834 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:55:20 crc kubenswrapper[5034]: I0105 21:55:20.468896 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:55:20 crc kubenswrapper[5034]: I0105 21:55:20.469406 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:55:20 crc kubenswrapper[5034]: I0105 21:55:20.469504 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2" gracePeriod=600 Jan 05 21:55:21 crc kubenswrapper[5034]: I0105 21:55:21.976742 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2" exitCode=0 Jan 05 21:55:21 crc kubenswrapper[5034]: I0105 21:55:21.976822 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2"} Jan 05 21:55:22 crc kubenswrapper[5034]: I0105 21:55:22.985503 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"40d2eb8f27d98116792dd8c5580bff63065eb020012694353293a0840e15892d"} Jan 05 21:55:23 crc kubenswrapper[5034]: I0105 21:55:23.006484 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rwss9" podStartSLOduration=9.878781318 podStartE2EDuration="1m10.0064447s" podCreationTimestamp="2026-01-05 21:54:13 +0000 UTC" firstStartedPulling="2026-01-05 21:54:16.203400911 +0000 UTC m=+148.575400350" lastFinishedPulling="2026-01-05 21:55:16.331064293 +0000 UTC m=+208.703063732" observedRunningTime="2026-01-05 21:55:17.022201315 +0000 UTC m=+209.394200754" watchObservedRunningTime="2026-01-05 21:55:23.0064447 +0000 UTC m=+215.378444179" Jan 05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.283975 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.284319 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.327072 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.562537 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.683575 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.683634 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.719427 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84nh6" Jan 
05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.984713 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:55:24 crc kubenswrapper[5034]: I0105 21:55:24.985109 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:55:25 crc kubenswrapper[5034]: I0105 21:55:25.026045 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:55:25 crc kubenswrapper[5034]: I0105 21:55:25.037010 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:55:25 crc kubenswrapper[5034]: I0105 21:55:25.037249 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:55:25 crc kubenswrapper[5034]: I0105 21:55:25.390071 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g2l2g"] Jan 05 21:55:25 crc kubenswrapper[5034]: I0105 21:55:25.954474 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84nh6"] Jan 05 21:55:26 crc kubenswrapper[5034]: I0105 21:55:26.040574 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:55:27 crc kubenswrapper[5034]: I0105 21:55:27.004429 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84nh6" podUID="66409272-43c4-46a0-8a57-c34201f689f2" containerName="registry-server" containerID="cri-o://f1f10de5d391d07c449978b6d0d0c53da21cb11066b0205f8b0ebc66c63dff93" gracePeriod=2 Jan 05 21:55:27 crc kubenswrapper[5034]: I0105 21:55:27.357597 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v7qh7"] Jan 05 21:55:27 crc kubenswrapper[5034]: I0105 21:55:27.560314 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:55:27 crc kubenswrapper[5034]: I0105 21:55:27.599000 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:55:28 crc kubenswrapper[5034]: I0105 21:55:28.012661 5034 generic.go:334] "Generic (PLEG): container finished" podID="66409272-43c4-46a0-8a57-c34201f689f2" containerID="f1f10de5d391d07c449978b6d0d0c53da21cb11066b0205f8b0ebc66c63dff93" exitCode=0 Jan 05 21:55:28 crc kubenswrapper[5034]: I0105 21:55:28.012747 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84nh6" event={"ID":"66409272-43c4-46a0-8a57-c34201f689f2","Type":"ContainerDied","Data":"f1f10de5d391d07c449978b6d0d0c53da21cb11066b0205f8b0ebc66c63dff93"} Jan 05 21:55:28 crc kubenswrapper[5034]: I0105 21:55:28.012955 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v7qh7" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerName="registry-server" containerID="cri-o://73a3447a4d315d669d4d8b84e8d2c042aa34d3af27a37dc815c418ada453b988" gracePeriod=2 Jan 05 21:55:28 crc kubenswrapper[5034]: I0105 21:55:28.957312 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.019760 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84nh6" event={"ID":"66409272-43c4-46a0-8a57-c34201f689f2","Type":"ContainerDied","Data":"0b15f0252fa1cfc2a5ba072512a62ff287509abd0b14daf50b34694984f19a3e"} Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.019806 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84nh6" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.019819 5034 scope.go:117] "RemoveContainer" containerID="f1f10de5d391d07c449978b6d0d0c53da21cb11066b0205f8b0ebc66c63dff93" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.022904 5034 generic.go:334] "Generic (PLEG): container finished" podID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerID="73a3447a4d315d669d4d8b84e8d2c042aa34d3af27a37dc815c418ada453b988" exitCode=0 Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.022934 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7qh7" event={"ID":"d2634a68-a5ff-4370-bdf4-e41065a0b8ef","Type":"ContainerDied","Data":"73a3447a4d315d669d4d8b84e8d2c042aa34d3af27a37dc815c418ada453b988"} Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.046888 5034 scope.go:117] "RemoveContainer" containerID="6aaefb72b0848d5fa4232e5608958b26da11bd2910a18932c829db5abdea7005" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.066451 5034 scope.go:117] "RemoveContainer" containerID="d58549b14928f3a4bc931794203eb7b35c751cf077e9d7201ba494deb0cbfa10" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.078024 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-utilities\") pod \"66409272-43c4-46a0-8a57-c34201f689f2\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.078113 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-catalog-content\") pod \"66409272-43c4-46a0-8a57-c34201f689f2\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.078218 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb42j\" (UniqueName: \"kubernetes.io/projected/66409272-43c4-46a0-8a57-c34201f689f2-kube-api-access-bb42j\") pod \"66409272-43c4-46a0-8a57-c34201f689f2\" (UID: \"66409272-43c4-46a0-8a57-c34201f689f2\") " Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.079471 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-utilities" (OuterVolumeSpecName: "utilities") pod "66409272-43c4-46a0-8a57-c34201f689f2" (UID: "66409272-43c4-46a0-8a57-c34201f689f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.087260 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66409272-43c4-46a0-8a57-c34201f689f2-kube-api-access-bb42j" (OuterVolumeSpecName: "kube-api-access-bb42j") pod "66409272-43c4-46a0-8a57-c34201f689f2" (UID: "66409272-43c4-46a0-8a57-c34201f689f2"). InnerVolumeSpecName "kube-api-access-bb42j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.162134 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66409272-43c4-46a0-8a57-c34201f689f2" (UID: "66409272-43c4-46a0-8a57-c34201f689f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.179818 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb42j\" (UniqueName: \"kubernetes.io/projected/66409272-43c4-46a0-8a57-c34201f689f2-kube-api-access-bb42j\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.180235 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.180421 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66409272-43c4-46a0-8a57-c34201f689f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.285482 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.348902 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84nh6"] Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.350999 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84nh6"] Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.381923 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-catalog-content\") pod \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.381985 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-utilities\") pod \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.382192 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxjt5\" (UniqueName: \"kubernetes.io/projected/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-kube-api-access-rxjt5\") pod \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\" (UID: \"d2634a68-a5ff-4370-bdf4-e41065a0b8ef\") " Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.383193 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-utilities" (OuterVolumeSpecName: "utilities") pod "d2634a68-a5ff-4370-bdf4-e41065a0b8ef" (UID: "d2634a68-a5ff-4370-bdf4-e41065a0b8ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.385425 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-kube-api-access-rxjt5" (OuterVolumeSpecName: "kube-api-access-rxjt5") pod "d2634a68-a5ff-4370-bdf4-e41065a0b8ef" (UID: "d2634a68-a5ff-4370-bdf4-e41065a0b8ef"). InnerVolumeSpecName "kube-api-access-rxjt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.440569 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2634a68-a5ff-4370-bdf4-e41065a0b8ef" (UID: "d2634a68-a5ff-4370-bdf4-e41065a0b8ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.484666 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxjt5\" (UniqueName: \"kubernetes.io/projected/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-kube-api-access-rxjt5\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.484705 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.484730 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2634a68-a5ff-4370-bdf4-e41065a0b8ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:29 crc kubenswrapper[5034]: I0105 21:55:29.845050 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66409272-43c4-46a0-8a57-c34201f689f2" path="/var/lib/kubelet/pods/66409272-43c4-46a0-8a57-c34201f689f2/volumes" Jan 05 21:55:30 crc kubenswrapper[5034]: I0105 21:55:30.034517 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7qh7" event={"ID":"d2634a68-a5ff-4370-bdf4-e41065a0b8ef","Type":"ContainerDied","Data":"54a58820952bb89a6ee12f5df1a035118e51d64d6e8a40bd3cf6409ec6b897ce"} Jan 05 21:55:30 crc kubenswrapper[5034]: I0105 21:55:30.034538 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7qh7" Jan 05 21:55:30 crc kubenswrapper[5034]: I0105 21:55:30.034570 5034 scope.go:117] "RemoveContainer" containerID="73a3447a4d315d669d4d8b84e8d2c042aa34d3af27a37dc815c418ada453b988" Jan 05 21:55:30 crc kubenswrapper[5034]: I0105 21:55:30.051257 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v7qh7"] Jan 05 21:55:30 crc kubenswrapper[5034]: I0105 21:55:30.055129 5034 scope.go:117] "RemoveContainer" containerID="9a86b2b39594dfa2b8d733510ce159b73bf639e5d9e9d97ed9c369d4b2e4d8ad" Jan 05 21:55:30 crc kubenswrapper[5034]: I0105 21:55:30.055839 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v7qh7"] Jan 05 21:55:30 crc kubenswrapper[5034]: I0105 21:55:30.069459 5034 scope.go:117] "RemoveContainer" containerID="42f9ebed98569d7f233fb2ce5841061854f572fff3a2e9bdb7dbf1edd2aea22b" Jan 05 21:55:31 crc kubenswrapper[5034]: I0105 21:55:31.845601 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" path="/var/lib/kubelet/pods/d2634a68-a5ff-4370-bdf4-e41065a0b8ef/volumes" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.211888 5034 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212455 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c821c2-4c3c-456b-b280-f20c523587ea" containerName="extract-utilities" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212471 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c821c2-4c3c-456b-b280-f20c523587ea" containerName="extract-utilities" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212486 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66409272-43c4-46a0-8a57-c34201f689f2" 
containerName="extract-utilities" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212494 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="66409272-43c4-46a0-8a57-c34201f689f2" containerName="extract-utilities" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212506 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerName="extract-content" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212512 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerName="extract-content" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212527 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c821c2-4c3c-456b-b280-f20c523587ea" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212534 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c821c2-4c3c-456b-b280-f20c523587ea" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212541 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerName="extract-utilities" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212547 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerName="extract-utilities" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212555 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66409272-43c4-46a0-8a57-c34201f689f2" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212560 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="66409272-43c4-46a0-8a57-c34201f689f2" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212567 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212573 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212583 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212589 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212598 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c821c2-4c3c-456b-b280-f20c523587ea" containerName="extract-content" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212605 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c821c2-4c3c-456b-b280-f20c523587ea" containerName="extract-content" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212615 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66409272-43c4-46a0-8a57-c34201f689f2" containerName="extract-content" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212621 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="66409272-43c4-46a0-8a57-c34201f689f2" containerName="extract-content" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212629 5034 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerName="extract-content" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212636 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerName="extract-content" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.212646 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerName="extract-utilities" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212653 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerName="extract-utilities" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212757 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="66409272-43c4-46a0-8a57-c34201f689f2" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212777 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2634a68-a5ff-4370-bdf4-e41065a0b8ef" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212785 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c821c2-4c3c-456b-b280-f20c523587ea" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.212793 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe086f6-53ed-4d3a-b442-74671a78f935" containerName="registry-server" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.213168 5034 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.213346 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.213633 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0" gracePeriod=15 Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.213622 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5" gracePeriod=15 Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.213749 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb" gracePeriod=15 Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.213775 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab" gracePeriod=15 Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.213801 5034 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411" gracePeriod=15 Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.215324 5034 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.215643 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.215678 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.215703 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.215716 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.215732 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.215746 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.215764 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.215776 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.215791 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.215804 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.215818 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.215829 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.215843 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.215855 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.216023 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.216040 5034 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.216057 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.216068 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.216135 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.216159 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.261380 5034 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.332158 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.332197 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.332219 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.332406 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.332429 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433772 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433813 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433848 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433865 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433883 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433911 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433902 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433935 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433947 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.433990 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.434023 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.434043 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.434129 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.535024 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.535105 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.535115 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.535182 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.535191 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.535305 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: I0105 21:55:41.568495 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:41 crc kubenswrapper[5034]: E0105 21:55:41.588429 5034 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1887f4703882542b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 21:55:41.587788843 +0000 UTC m=+233.959788272,LastTimestamp:2026-01-05 21:55:41.587788843 +0000 UTC m=+233.959788272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.096217 5034 generic.go:334] "Generic (PLEG): container finished" podID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" containerID="cb33fe18e08e9d9c915bc8bb49a715ba849a0c9699ec5cf72ede45b24bb08344" exitCode=0 Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.096268 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05fed08d-c55d-4e3f-8940-a6af0cdd5f77","Type":"ContainerDied","Data":"cb33fe18e08e9d9c915bc8bb49a715ba849a0c9699ec5cf72ede45b24bb08344"} Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.097132 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.099750 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.101193 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.102283 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab" exitCode=0 Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.102320 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb" exitCode=0 Jan 05 
21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.102328 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0" exitCode=0 Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.102335 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411" exitCode=2 Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.102398 5034 scope.go:117] "RemoveContainer" containerID="fe337f8bba552223f13f9664eeb247f437c83414bb7614183067874e79a6db07" Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.103968 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6aa32773c1b5728d7780961ca05c9d3e9ecb8586b931ba1fbe382481aed55397"} Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.103998 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"60ab51adaa2225c0711df92b42a0100cfb4d8a6003b71298127f8cca7e02d03d"} Jan 05 21:55:42 crc kubenswrapper[5034]: E0105 21:55:42.104567 5034 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:55:42 crc kubenswrapper[5034]: I0105 21:55:42.104836 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.112655 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.398781 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.399737 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.463997 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kube-api-access\") pod \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.464151 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-var-lock\") pod \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.464223 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kubelet-dir\") pod \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\" (UID: \"05fed08d-c55d-4e3f-8940-a6af0cdd5f77\") " Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.464278 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-var-lock" (OuterVolumeSpecName: "var-lock") pod "05fed08d-c55d-4e3f-8940-a6af0cdd5f77" (UID: "05fed08d-c55d-4e3f-8940-a6af0cdd5f77"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.464397 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05fed08d-c55d-4e3f-8940-a6af0cdd5f77" (UID: "05fed08d-c55d-4e3f-8940-a6af0cdd5f77"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.464494 5034 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.464516 5034 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-var-lock\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.472884 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05fed08d-c55d-4e3f-8940-a6af0cdd5f77" (UID: "05fed08d-c55d-4e3f-8940-a6af0cdd5f77"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.565448 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fed08d-c55d-4e3f-8940-a6af0cdd5f77-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.573977 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.575038 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.575560 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.575838 5034 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.669400 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.669466 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.669513 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.669652 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.669683 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.669752 5034 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.669793 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.770316 5034 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.770346 5034 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:43 crc kubenswrapper[5034]: I0105 21:55:43.844652 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.120834 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.121589 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5" exitCode=0 Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.121723 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.121885 5034 scope.go:117] "RemoveContainer" containerID="6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.122388 5034 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.122601 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.123048 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05fed08d-c55d-4e3f-8940-a6af0cdd5f77","Type":"ContainerDied","Data":"6d2d90fc7bcb864d25fa11eecbd373226db1c979df66eaf53c355993b447ddff"} Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.123087 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d2d90fc7bcb864d25fa11eecbd373226db1c979df66eaf53c355993b447ddff" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.123151 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.126186 5034 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.126557 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.127637 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.127954 5034 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.135540 5034 scope.go:117] "RemoveContainer" containerID="496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.152853 5034 scope.go:117] 
"RemoveContainer" containerID="4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.170140 5034 scope.go:117] "RemoveContainer" containerID="98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.185668 5034 scope.go:117] "RemoveContainer" containerID="09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.201366 5034 scope.go:117] "RemoveContainer" containerID="b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.219615 5034 scope.go:117] "RemoveContainer" containerID="6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab" Jan 05 21:55:44 crc kubenswrapper[5034]: E0105 21:55:44.220261 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\": container with ID starting with 6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab not found: ID does not exist" containerID="6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.220312 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab"} err="failed to get container status \"6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\": rpc error: code = NotFound desc = could not find container \"6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab\": container with ID starting with 6d7d78cc108f4798340d83c2d2dc18b867333f6341dffbfef14f0f8bde264bab not found: ID does not exist" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.220354 5034 scope.go:117] "RemoveContainer" containerID="496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb" Jan 05 21:55:44 crc kubenswrapper[5034]: E0105 21:55:44.220724 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\": container with ID starting with 496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb not found: ID does not exist" containerID="496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.220752 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb"} err="failed to get container status \"496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\": rpc error: code = NotFound desc = could not find container \"496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb\": container with ID starting with 496e4515a4cc7c988875f0543e546b974ab5d499704eee8d6f98b94f576a69bb not found: ID does not exist" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.220769 5034 scope.go:117] "RemoveContainer" containerID="4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0" Jan 05 21:55:44 crc kubenswrapper[5034]: E0105 21:55:44.221235 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\": 
container with ID starting with 4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0 not found: ID does not exist" containerID="4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.221297 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0"} err="failed to get container status \"4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\": rpc error: code = NotFound desc = could not find container \"4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0\": container with ID starting with 4014064a26b9ae988579f8fadb97e09f08493bcb4e5330da1df26cd01664c0a0 not found: ID does not exist" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.221327 5034 scope.go:117] "RemoveContainer" containerID="98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411" Jan 05 21:55:44 crc kubenswrapper[5034]: E0105 21:55:44.221596 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\": container with ID starting with 98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411 not found: ID does not exist" containerID="98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.221621 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411"} err="failed to get container status \"98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\": rpc error: code = NotFound desc = could not find container \"98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411\": container with ID starting with 98c7e5ef6c0692850480da73cfeae820b71a87f8164be319c5ca08dbdb834411 not found: ID does not exist" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.221636 5034 scope.go:117] "RemoveContainer" containerID="09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5" Jan 05 21:55:44 crc kubenswrapper[5034]: E0105 21:55:44.222400 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\": container with ID starting with 09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5 not found: ID does not exist" containerID="09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.222457 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5"} err="failed to get container status \"09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\": rpc error: code = NotFound desc = could not find container \"09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5\": container with ID starting with 09a172ecf0f3615383f079cead0420659c2bdd7faa04251933c57d013dfe2ed5 not found: ID does not exist" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.222478 5034 scope.go:117] "RemoveContainer" containerID="b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36" Jan 05 21:55:44 crc kubenswrapper[5034]: E0105 21:55:44.222871 5034 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\": container with ID starting with b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36 not found: ID does not exist" containerID="b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36" Jan 05 21:55:44 crc kubenswrapper[5034]: I0105 21:55:44.222894 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36"} err="failed to get container status \"b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\": rpc error: code = NotFound desc = could not find container \"b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36\": container with ID starting with b5b5a63ffcdec7cb6af5edcadb9ddfd5a3fa80feab82453c28f337aa6a899f36 not found: ID does not exist" Jan 05 21:55:45 crc kubenswrapper[5034]: E0105 21:55:45.833379 5034 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1887f4703882542b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 21:55:41.587788843 +0000 UTC m=+233.959788272,LastTimestamp:2026-01-05 21:55:41.587788843 +0000 UTC m=+233.959788272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 21:55:47 crc kubenswrapper[5034]: I0105 21:55:47.842873 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.428261 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" containerName="oauth-openshift" containerID="cri-o://9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b" gracePeriod=15 Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.786110 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.786808 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.787375 5034 status_manager.go:851] "Failed to get status for pod" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g2l2g\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.863874 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-session\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.863929 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd4dl\" (UniqueName: \"kubernetes.io/projected/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-kube-api-access-dd4dl\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.863972 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-router-certs\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.863992 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-policies\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864013 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-ocp-branding-template\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864029 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-idp-0-file-data\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864049 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-serving-cert\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 
21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864063 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-login\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864097 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-cliconfig\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864119 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-service-ca\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864145 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-error\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864172 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-dir\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864194 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-provider-selection\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864221 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-trusted-ca-bundle\") pod \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\" (UID: \"6db10d40-2f3c-44aa-a116-8f5ffa8577cd\") " Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.864875 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.865464 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.866034 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.866561 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.866774 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.870754 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.871067 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-kube-api-access-dd4dl" (OuterVolumeSpecName: "kube-api-access-dd4dl") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "kube-api-access-dd4dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.871127 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.871461 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.871826 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.871960 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.872042 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.872162 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.876886 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6db10d40-2f3c-44aa-a116-8f5ffa8577cd" (UID: "6db10d40-2f3c-44aa-a116-8f5ffa8577cd"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:50 crc kubenswrapper[5034]: E0105 21:55:50.934199 5034 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:50 crc kubenswrapper[5034]: E0105 21:55:50.934765 5034 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:50 crc kubenswrapper[5034]: E0105 21:55:50.935306 5034 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:50 crc kubenswrapper[5034]: E0105 21:55:50.935556 5034 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:50 crc kubenswrapper[5034]: E0105 21:55:50.935856 5034 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.935970 5034 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 05 21:55:50 crc kubenswrapper[5034]: E0105 21:55:50.936316 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="200ms" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.965423 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.965893 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd4dl\" (UniqueName: \"kubernetes.io/projected/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-kube-api-access-dd4dl\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.965978 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966046 5034 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966132 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: 
I0105 21:55:50.966194 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966249 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966311 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966366 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966420 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966473 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966528 5034 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966598 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:50 crc kubenswrapper[5034]: I0105 21:55:50.966660 5034 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6db10d40-2f3c-44aa-a116-8f5ffa8577cd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:51 crc kubenswrapper[5034]: E0105 21:55:51.137406 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="400ms" Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.159777 5034 generic.go:334] "Generic (PLEG): container finished" podID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" containerID="9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b" exitCode=0 Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.159863 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.159869 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" event={"ID":"6db10d40-2f3c-44aa-a116-8f5ffa8577cd","Type":"ContainerDied","Data":"9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b"} Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.160292 5034 scope.go:117] "RemoveContainer" containerID="9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b" Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.160262 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" event={"ID":"6db10d40-2f3c-44aa-a116-8f5ffa8577cd","Type":"ContainerDied","Data":"95dfbc7feebd24178b7574e09efca50c4fa462ff11dd4ea326533606ef69d453"} Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.160924 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.161224 5034 status_manager.go:851] "Failed to get status for pod" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g2l2g\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.175388 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.175820 5034 status_manager.go:851] "Failed to get status for pod" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g2l2g\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.181470 5034 scope.go:117] "RemoveContainer" containerID="9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b" Jan 05 21:55:51 crc kubenswrapper[5034]: E0105 21:55:51.182464 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b\": container with ID starting with 9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b not found: ID does not exist" containerID="9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b" Jan 05 21:55:51 crc kubenswrapper[5034]: I0105 21:55:51.182520 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b"} err="failed to get container status \"9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b\": rpc error: code = NotFound desc = could not 
find container \"9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b\": container with ID starting with 9b5ba31f94c6b5324c3d0ad9b7f088ddbc9c6c727cc4e1ce714466a6f164de8b not found: ID does not exist" Jan 05 21:55:51 crc kubenswrapper[5034]: E0105 21:55:51.538030 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="800ms" Jan 05 21:55:52 crc kubenswrapper[5034]: E0105 21:55:52.338667 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="1.6s" Jan 05 21:55:53 crc kubenswrapper[5034]: I0105 21:55:53.837888 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:53 crc kubenswrapper[5034]: I0105 21:55:53.839132 5034 status_manager.go:851] "Failed to get status for pod" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g2l2g\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:53 crc kubenswrapper[5034]: I0105 21:55:53.839725 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:53 crc kubenswrapper[5034]: I0105 21:55:53.851237 5034 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:55:53 crc kubenswrapper[5034]: I0105 21:55:53.851278 5034 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:55:53 crc kubenswrapper[5034]: E0105 21:55:53.851869 5034 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:53 crc kubenswrapper[5034]: I0105 21:55:53.852759 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:53 crc kubenswrapper[5034]: W0105 21:55:53.870949 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-58e87a39cd83f95ddbbbeef817c21514596d99791cf87a4dcd922a1b39d18ba3 WatchSource:0}: Error finding container 58e87a39cd83f95ddbbbeef817c21514596d99791cf87a4dcd922a1b39d18ba3: Status 404 returned error can't find the container with id 58e87a39cd83f95ddbbbeef817c21514596d99791cf87a4dcd922a1b39d18ba3 Jan 05 21:55:53 crc kubenswrapper[5034]: E0105 21:55:53.940515 5034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="3.2s" Jan 05 21:55:54 crc kubenswrapper[5034]: I0105 21:55:54.181911 5034 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="efc8a09590dc1e281e077f584eb492747bfce53a66a00110db27c724b36de0b8" exitCode=0 Jan 05 21:55:54 crc kubenswrapper[5034]: I0105 21:55:54.182051 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"efc8a09590dc1e281e077f584eb492747bfce53a66a00110db27c724b36de0b8"} Jan 05 21:55:54 crc kubenswrapper[5034]: I0105 21:55:54.182249 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58e87a39cd83f95ddbbbeef817c21514596d99791cf87a4dcd922a1b39d18ba3"} Jan 05 21:55:54 crc kubenswrapper[5034]: I0105 21:55:54.182552 5034 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:55:54 crc kubenswrapper[5034]: I0105 21:55:54.182587 5034 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:55:54 crc kubenswrapper[5034]: E0105 21:55:54.183052 5034 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:54 crc kubenswrapper[5034]: I0105 21:55:54.183206 5034 status_manager.go:851] "Failed to get status for pod" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:54 crc kubenswrapper[5034]: I0105 21:55:54.183599 5034 status_manager.go:851] "Failed to get status for pod" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" pod="openshift-authentication/oauth-openshift-558db77b4-g2l2g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g2l2g\": dial tcp 38.102.83.156:6443: connect: connection refused" Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.199316 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"409fde22257295322e2654683b962f5b2013c6b94c78059c5a7a24ad6aa9c96c"} Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.199631 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c9d31fd9820c6c4c5a4f030d47011d540d90df0a1f89633886d757a8a0d0b336"} Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.199643 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0f3288aab868025f483989e34c9a33aff637c3da1282161265401d4f86d2975d"} Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.199653 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c167ffa44c10d4843d9bf32fdafdfd5aafb8a4f7146e4b753db168959c5753d8"} Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.199664 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7b1a57888bbd49154eb5118a8adf315b61856f7096372d89a975e7fd16654a84"} Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.200035 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.200255 5034 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.200284 5034 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.203554 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.203590 5034 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21" exitCode=1 Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.203608 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21"} Jan 05 21:55:55 crc kubenswrapper[5034]: I0105 21:55:55.203962 5034 scope.go:117] "RemoveContainer" containerID="8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21" Jan 05 21:55:56 crc kubenswrapper[5034]: I0105 21:55:56.212394 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 05 21:55:56 crc kubenswrapper[5034]: I0105 21:55:56.212468 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45c72b02538b04b74f5d643b7964ba0c4c123ff59358c91275c4ed6e89088a93"} Jan 05 21:55:56 crc kubenswrapper[5034]: I0105 21:55:56.531841 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:55:56 crc kubenswrapper[5034]: I0105 21:55:56.532117 5034 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:55:56 crc kubenswrapper[5034]: I0105 21:55:56.532160 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:55:58 crc kubenswrapper[5034]: I0105 21:55:58.176794 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:55:58 crc kubenswrapper[5034]: I0105 21:55:58.853562 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:58 crc kubenswrapper[5034]: I0105 21:55:58.853597 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:55:58 crc kubenswrapper[5034]: I0105 21:55:58.857847 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:56:00 crc kubenswrapper[5034]: I0105 21:56:00.858217 5034 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:56:00 crc kubenswrapper[5034]: I0105 21:56:00.946402 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d3a8e18-af7f-464f-9488-45e51fb0956b" Jan 05 21:56:01 crc kubenswrapper[5034]: I0105 21:56:01.240113 5034 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:56:01 crc kubenswrapper[5034]: I0105 21:56:01.240143 5034 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:56:01 crc kubenswrapper[5034]: I0105 21:56:01.244038 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:56:01 crc kubenswrapper[5034]: I0105 21:56:01.244091 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d3a8e18-af7f-464f-9488-45e51fb0956b" Jan 05 21:56:02 crc kubenswrapper[5034]: I0105 21:56:02.246009 5034 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:56:02 crc kubenswrapper[5034]: I0105 21:56:02.246391 5034 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="03bfa87a-54ff-4b62-93fc-cd9081c177e1" Jan 05 21:56:02 crc kubenswrapper[5034]: I0105 21:56:02.249135 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d3a8e18-af7f-464f-9488-45e51fb0956b" Jan 05 21:56:06 crc kubenswrapper[5034]: I0105 21:56:06.531944 5034 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:56:06 crc kubenswrapper[5034]: I0105 21:56:06.532317 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:56:10 crc kubenswrapper[5034]: I0105 21:56:10.738667 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 05 21:56:11 crc kubenswrapper[5034]: I0105 21:56:11.453533 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 05 21:56:11 crc kubenswrapper[5034]: I0105 21:56:11.965178 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 05 21:56:12 crc kubenswrapper[5034]: I0105 21:56:12.019020 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 05 21:56:12 crc kubenswrapper[5034]: I0105 21:56:12.568844 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 21:56:12 crc kubenswrapper[5034]: I0105 21:56:12.576518 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 05 21:56:12 crc kubenswrapper[5034]: I0105 21:56:12.652116 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 05 21:56:12 crc kubenswrapper[5034]: I0105 21:56:12.666876 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 05 21:56:12 crc kubenswrapper[5034]: I0105 21:56:12.812306 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 05 21:56:13 crc kubenswrapper[5034]: I0105 21:56:13.342039 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 05 21:56:13 crc kubenswrapper[5034]: I0105 21:56:13.372604 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 05 21:56:13 crc kubenswrapper[5034]: I0105 21:56:13.392372 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 05 21:56:13 crc 
kubenswrapper[5034]: I0105 21:56:13.538010 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 05 21:56:13 crc kubenswrapper[5034]: I0105 21:56:13.571003 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 05 21:56:13 crc kubenswrapper[5034]: I0105 21:56:13.891659 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 05 21:56:13 crc kubenswrapper[5034]: I0105 21:56:13.930836 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 05 21:56:14 crc kubenswrapper[5034]: I0105 21:56:14.183124 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 21:56:14 crc kubenswrapper[5034]: I0105 21:56:14.206966 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 05 21:56:14 crc kubenswrapper[5034]: I0105 21:56:14.296565 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 05 21:56:14 crc kubenswrapper[5034]: I0105 21:56:14.420336 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 05 21:56:14 crc kubenswrapper[5034]: I0105 21:56:14.589371 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 05 21:56:14 crc kubenswrapper[5034]: I0105 21:56:14.679427 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 05 21:56:14 crc kubenswrapper[5034]: I0105 21:56:14.775566 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.036192 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.155647 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.163396 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.170155 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.238521 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.298486 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.335266 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.422347 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" 
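
[The patch_prober.go/prober.go pairs above record the kubelet's startup-probe loop for kube-controller-manager: an HTTPS GET against https://192.168.126.11:10257/healthz roughly every 10s, each attempt failing with "connection refused" while the process is still coming up. The sketch below is a minimal stand-in for that check, not the kubelet's actual prober code; probeOnce, the 5s timeout, and the three-attempt loop are illustrative assumptions, while the URL, the ~10s period, and the 30s grace period are taken from the surrounding log. Once the configured failureThreshold is exhausted, the kubelet logs "failed startup probe, will be restarted" and kills the container with the pod's grace period, which is exactly the 21:56:16 sequence further below.]

```go
// Approximate stand-in for the check behind the "Probe failed" /
// "Startup probe status=failure" lines: an HTTPS GET against the
// controller-manager health endpoint. probeOnce and its parameters
// are illustrative names, not kubelet code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			// Kubernetes HTTPS probes skip serving-cert verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "connect: connection refused" while the port is not yet bound
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint and ~10s period as seen in the log; the attempt count is
	// illustrative. After failureThreshold attempts the kubelet restarts
	// the container with the pod's grace period (30s here).
	for i := 0; i < 3; i++ {
		if err := probeOnce("https://192.168.126.11:10257/healthz", 5*time.Second); err != nil {
			fmt.Println("Probe failed:", err)
		} else {
			fmt.Println("Probe succeeded")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("startup probe exhausted; kubelet would restart the container")
}
```
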
Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.471239 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.495701 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.555121 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.589270 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.758519 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 05 21:56:15 crc kubenswrapper[5034]: I0105 21:56:15.759047 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.250121 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.256020 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.313746 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.377815 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.501648 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.531929 5034 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.532008 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.532062 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.532660 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"45c72b02538b04b74f5d643b7964ba0c4c123ff59358c91275c4ed6e89088a93"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 05 21:56:16 crc 
kubenswrapper[5034]: I0105 21:56:16.532765 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://45c72b02538b04b74f5d643b7964ba0c4c123ff59358c91275c4ed6e89088a93" gracePeriod=30 Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.533419 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.696885 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.709570 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.839784 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.903932 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.923055 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.976886 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 05 21:56:16 crc kubenswrapper[5034]: I0105 21:56:16.996443 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.129943 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.136400 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.150422 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.191837 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.241216 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.257270 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.280161 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.366045 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.414033 5034 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.431280 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.439421 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.720687 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.788821 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.882988 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.901854 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.956281 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.992251 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 21:56:17 crc kubenswrapper[5034]: I0105 21:56:17.994445 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.012440 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.016087 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.097524 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.111489 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.293197 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.342175 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.413249 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.681697 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 05 21:56:18 crc kubenswrapper[5034]: I0105 21:56:18.996452 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.102809 5034 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.114171 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.120388 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.157842 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.247010 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.356132 5034 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.367262 5034 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.449862 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.564414 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.679177 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.812629 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.877261 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.904133 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.924027 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 05 21:56:19 crc kubenswrapper[5034]: I0105 21:56:19.975340 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.013641 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.065427 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.072418 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.085117 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.133339 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 05 21:56:20 crc 
kubenswrapper[5034]: I0105 21:56:20.279668 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.339817 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.370038 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.423338 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.429875 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.486070 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.517794 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.529071 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.576218 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.584172 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.751122 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.806588 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.871619 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.927932 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 05 21:56:20 crc kubenswrapper[5034]: I0105 21:56:20.965946 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.020727 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.034956 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.068898 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.087365 5034 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.177915 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.227680 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.310786 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.385812 5034 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.429578 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.509288 5034 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.593657 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.664306 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.718016 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.747438 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.769465 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.795597 5034 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.802005 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.802254 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.805391 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g2l2g","openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.805456 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.810519 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.813704 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.844246 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.84421407 podStartE2EDuration="21.84421407s" podCreationTimestamp="2026-01-05 21:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:56:21.825438756 +0000 UTC m=+274.197438195" watchObservedRunningTime="2026-01-05 21:56:21.84421407 +0000 UTC m=+274.216213509" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.846788 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" path="/var/lib/kubelet/pods/6db10d40-2f3c-44aa-a116-8f5ffa8577cd/volumes" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.947765 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 05 21:56:21 crc kubenswrapper[5034]: I0105 21:56:21.985240 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.041936 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.077876 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.122622 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.218859 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.270365 5034 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.270595 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6aa32773c1b5728d7780961ca05c9d3e9ecb8586b931ba1fbe382481aed55397" gracePeriod=5 Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.279316 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.279480 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.406465 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.422342 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.451316 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.451397 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 05 21:56:22 crc 
kubenswrapper[5034]: I0105 21:56:22.524816 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.543941 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.579939 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.587001 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.709396 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.941545 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.953589 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 05 21:56:22 crc kubenswrapper[5034]: I0105 21:56:22.995884 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.024399 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.041195 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.048860 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.070838 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.097393 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.121130 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.131482 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.181257 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.200033 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.209764 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.210461 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" 
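
[Each "Caches populated" line above is one client-go reflector completing its initial List+Watch: the kubelet's watch-based cache manager starts a dedicated reflector per Secret or ConfigMap that a scheduled pod references (hence the object-"namespace"/"name" source tag) and emits reflector.go:368 once that object's local cache is synced. Below is a rough analogue using the public informer machinery; it syncs a whole namespace rather than a single named object, and the kubeconfig path and namespace are placeholder assumptions, not values from this log.]

```go
// A minimal client-go analogue of the "Caches populated for *v1.ConfigMap ..."
// lines: start a reflector-backed informer, wait for its initial sync.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; in-cluster config would also work.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Watch ConfigMaps in one namespace; the kubelet scopes its reflectors
	// even more narrowly, to a single named object, which is why the log
	// shows one "Caches populated" line per Secret/ConfigMap.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 0, informers.WithNamespace("openshift-multus"))
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	factory.Start(ctx.Done())
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("Caches populated for *v1.ConfigMap") // analogue of reflector.go:368
}
```

[Scoping each reflector to a single object keeps the kubelet's watch load proportional to the Secrets and ConfigMaps actually mounted on the node, rather than to everything in those namespaces.]
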
Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.228704 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.318235 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.362812 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.510273 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.563693 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.605377 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.609296 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.627822 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.637784 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.716622 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.789764 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.794133 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.801180 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.893862 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.909861 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.935978 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.954046 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 05 21:56:23 crc kubenswrapper[5034]: I0105 21:56:23.960813 5034 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.058229 5034 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.101325 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.116127 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.127864 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.246875 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.273837 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.273964 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.283582 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.284113 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.324716 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.341498 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.367748 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.371254 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.375154 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.416172 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.456356 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.509964 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.536128 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.543227 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.641550 5034 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.651656 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.674939 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.723579 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.728133 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.779152 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.887874 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.979116 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 05 21:56:24 crc kubenswrapper[5034]: I0105 21:56:24.998567 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.037744 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.109015 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.112487 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.144381 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.176152 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.195507 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.309792 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.447538 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.484008 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.485990 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.535906 5034 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.602486 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.826288 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b9699fff8-vjgj7"] Jan 05 21:56:25 crc kubenswrapper[5034]: E0105 21:56:25.826499 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" containerName="oauth-openshift" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.826511 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" containerName="oauth-openshift" Jan 05 21:56:25 crc kubenswrapper[5034]: E0105 21:56:25.826523 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" containerName="installer" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.826529 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" containerName="installer" Jan 05 21:56:25 crc kubenswrapper[5034]: E0105 21:56:25.826546 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.826553 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.826639 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db10d40-2f3c-44aa-a116-8f5ffa8577cd" containerName="oauth-openshift" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.826649 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.826659 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fed08d-c55d-4e3f-8940-a6af0cdd5f77" containerName="installer" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.826999 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.832688 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.833240 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.834065 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.834345 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.834440 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.834510 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.834675 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.834863 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.834906 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.836843 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.837250 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.845592 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.848867 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.852737 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b9699fff8-vjgj7"] Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.855125 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.860129 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.880535 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.957987 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 05 21:56:25 crc 
kubenswrapper[5034]: I0105 21:56:25.993977 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cee7826-569a-44f1-b95e-578bc7407569-audit-dir\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994019 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-session\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994063 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994104 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994121 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-audit-policies\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994140 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994179 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-error\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994203 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: 
\"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994234 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994259 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994286 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-login\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994305 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994326 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:25 crc kubenswrapper[5034]: I0105 21:56:25.994359 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6g7z\" (UniqueName: \"kubernetes.io/projected/1cee7826-569a-44f1-b95e-578bc7407569-kube-api-access-t6g7z\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.051617 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095374 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6g7z\" (UniqueName: \"kubernetes.io/projected/1cee7826-569a-44f1-b95e-578bc7407569-kube-api-access-t6g7z\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095447 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cee7826-569a-44f1-b95e-578bc7407569-audit-dir\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095468 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-session\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095504 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095527 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095547 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-audit-policies\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095571 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095612 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-error\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095636 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095662 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095679 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095696 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-login\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095712 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.095728 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.096231 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cee7826-569a-44f1-b95e-578bc7407569-audit-dir\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.096839 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.097701 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.098472 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-audit-policies\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.099043 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.101429 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-session\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.101871 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.102534 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.102982 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.102996 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.103042 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.115379 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6g7z\" (UniqueName: 
\"kubernetes.io/projected/1cee7826-569a-44f1-b95e-578bc7407569-kube-api-access-t6g7z\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.118363 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-error\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.119698 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1cee7826-569a-44f1-b95e-578bc7407569-v4-0-config-user-template-login\") pod \"oauth-openshift-6b9699fff8-vjgj7\" (UID: \"1cee7826-569a-44f1-b95e-578bc7407569\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.145657 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.294988 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.406498 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.504133 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.554486 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b9699fff8-vjgj7"] Jan 05 21:56:26 crc kubenswrapper[5034]: I0105 21:56:26.971478 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.036136 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.251910 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.291158 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.381480 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.381539 5034 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6aa32773c1b5728d7780961ca05c9d3e9ecb8586b931ba1fbe382481aed55397" exitCode=137 Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.382977 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" 
event={"ID":"1cee7826-569a-44f1-b95e-578bc7407569","Type":"ContainerStarted","Data":"6056b22471e60323ff479e26e116d0b37878cc6bc4484ca9efe65cee083433a6"} Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.383016 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" event={"ID":"1cee7826-569a-44f1-b95e-578bc7407569","Type":"ContainerStarted","Data":"738b2517188f6e019a1f00e3de3a8a2fd6403c74681305b6c53b16890acde258"} Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.383248 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.449623 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" podStartSLOduration=62.449606073 podStartE2EDuration="1m2.449606073s" podCreationTimestamp="2026-01-05 21:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:56:27.448692107 +0000 UTC m=+279.820691546" watchObservedRunningTime="2026-01-05 21:56:27.449606073 +0000 UTC m=+279.821605512" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.471704 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b9699fff8-vjgj7" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.547246 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.616375 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.834020 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.834100 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.919571 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.928679 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.928854 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.928895 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.928933 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.928980 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.929165 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.929190 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.929189 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.929223 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.929619 5034 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.929638 5034 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.929651 5034 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.929666 5034 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 05 21:56:27 crc kubenswrapper[5034]: I0105 21:56:27.939167 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:56:28 crc kubenswrapper[5034]: I0105 21:56:28.030549 5034 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:56:28 crc kubenswrapper[5034]: I0105 21:56:28.117349 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 05 21:56:28 crc kubenswrapper[5034]: I0105 21:56:28.138674 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 05 21:56:28 crc kubenswrapper[5034]: I0105 21:56:28.388332 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 05 21:56:28 crc kubenswrapper[5034]: I0105 21:56:28.388457 5034 scope.go:117] "RemoveContainer" containerID="6aa32773c1b5728d7780961ca05c9d3e9ecb8586b931ba1fbe382481aed55397" Jan 05 21:56:28 crc kubenswrapper[5034]: I0105 21:56:28.388470 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:56:29 crc kubenswrapper[5034]: I0105 21:56:29.290672 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 05 21:56:29 crc kubenswrapper[5034]: I0105 21:56:29.844125 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 05 21:56:31 crc kubenswrapper[5034]: I0105 21:56:31.300778 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 05 21:56:47 crc kubenswrapper[5034]: I0105 21:56:47.485486 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 05 21:56:47 crc kubenswrapper[5034]: I0105 21:56:47.487351 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 05 21:56:47 crc kubenswrapper[5034]: I0105 21:56:47.487394 5034 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="45c72b02538b04b74f5d643b7964ba0c4c123ff59358c91275c4ed6e89088a93" exitCode=137 Jan 05 21:56:47 crc kubenswrapper[5034]: I0105 21:56:47.487421 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"45c72b02538b04b74f5d643b7964ba0c4c123ff59358c91275c4ed6e89088a93"} Jan 05 21:56:47 crc kubenswrapper[5034]: I0105 21:56:47.487450 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"16192253b27c56fe11a30f4a5e7b81afc153c9548f0a9c69bdc443a3e09557b1"} Jan 05 21:56:47 crc kubenswrapper[5034]: I0105 21:56:47.487465 5034 scope.go:117] "RemoveContainer" containerID="8917715a2ec32ea544e990ff657788a0afb365030c18111fd8e276468378ef21" Jan 05 21:56:47 crc kubenswrapper[5034]: I0105 21:56:47.734267 5034 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 05 21:56:48 crc kubenswrapper[5034]: I0105 21:56:48.176712 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:56:48 crc kubenswrapper[5034]: I0105 21:56:48.493366 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 05 21:56:56 crc kubenswrapper[5034]: I0105 21:56:56.531802 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:56:56 crc kubenswrapper[5034]: I0105 21:56:56.537560 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:56:57 crc kubenswrapper[5034]: I0105 21:56:57.543572 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:57:15 
crc kubenswrapper[5034]: I0105 21:57:15.546788 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdbd6"] Jan 05 21:57:15 crc kubenswrapper[5034]: I0105 21:57:15.548274 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" podUID="3880fa85-26b0-4ed9-9b69-fe57b8c01092" containerName="controller-manager" containerID="cri-o://e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27" gracePeriod=30 Jan 05 21:57:15 crc kubenswrapper[5034]: I0105 21:57:15.550215 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq"] Jan 05 21:57:15 crc kubenswrapper[5034]: I0105 21:57:15.550474 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" podUID="143b2828-1125-4598-8d3a-44fdc8023b73" containerName="route-controller-manager" containerID="cri-o://5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6" gracePeriod=30 Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.103050 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.162669 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.171755 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-client-ca\") pod \"143b2828-1125-4598-8d3a-44fdc8023b73\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.171807 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-config\") pod \"143b2828-1125-4598-8d3a-44fdc8023b73\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.171856 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b2828-1125-4598-8d3a-44fdc8023b73-serving-cert\") pod \"143b2828-1125-4598-8d3a-44fdc8023b73\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.171903 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glzsf\" (UniqueName: \"kubernetes.io/projected/143b2828-1125-4598-8d3a-44fdc8023b73-kube-api-access-glzsf\") pod \"143b2828-1125-4598-8d3a-44fdc8023b73\" (UID: \"143b2828-1125-4598-8d3a-44fdc8023b73\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.173914 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-config" (OuterVolumeSpecName: "config") pod "143b2828-1125-4598-8d3a-44fdc8023b73" (UID: "143b2828-1125-4598-8d3a-44fdc8023b73"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.174442 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-client-ca" (OuterVolumeSpecName: "client-ca") pod "143b2828-1125-4598-8d3a-44fdc8023b73" (UID: "143b2828-1125-4598-8d3a-44fdc8023b73"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.178416 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143b2828-1125-4598-8d3a-44fdc8023b73-kube-api-access-glzsf" (OuterVolumeSpecName: "kube-api-access-glzsf") pod "143b2828-1125-4598-8d3a-44fdc8023b73" (UID: "143b2828-1125-4598-8d3a-44fdc8023b73"). InnerVolumeSpecName "kube-api-access-glzsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.179621 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143b2828-1125-4598-8d3a-44fdc8023b73-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "143b2828-1125-4598-8d3a-44fdc8023b73" (UID: "143b2828-1125-4598-8d3a-44fdc8023b73"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.272821 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-proxy-ca-bundles\") pod \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.272928 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3880fa85-26b0-4ed9-9b69-fe57b8c01092-serving-cert\") pod \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.272982 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-client-ca\") pod \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.273032 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzp47\" (UniqueName: \"kubernetes.io/projected/3880fa85-26b0-4ed9-9b69-fe57b8c01092-kube-api-access-dzp47\") pod \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.273077 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-config\") pod \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\" (UID: \"3880fa85-26b0-4ed9-9b69-fe57b8c01092\") " Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.273308 5034 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.273319 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/143b2828-1125-4598-8d3a-44fdc8023b73-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.273328 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b2828-1125-4598-8d3a-44fdc8023b73-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.273337 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glzsf\" (UniqueName: \"kubernetes.io/projected/143b2828-1125-4598-8d3a-44fdc8023b73-kube-api-access-glzsf\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.273894 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3880fa85-26b0-4ed9-9b69-fe57b8c01092" (UID: "3880fa85-26b0-4ed9-9b69-fe57b8c01092"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.273949 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-client-ca" (OuterVolumeSpecName: "client-ca") pod "3880fa85-26b0-4ed9-9b69-fe57b8c01092" (UID: "3880fa85-26b0-4ed9-9b69-fe57b8c01092"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.274319 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-config" (OuterVolumeSpecName: "config") pod "3880fa85-26b0-4ed9-9b69-fe57b8c01092" (UID: "3880fa85-26b0-4ed9-9b69-fe57b8c01092"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.277597 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3880fa85-26b0-4ed9-9b69-fe57b8c01092-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3880fa85-26b0-4ed9-9b69-fe57b8c01092" (UID: "3880fa85-26b0-4ed9-9b69-fe57b8c01092"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.277724 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3880fa85-26b0-4ed9-9b69-fe57b8c01092-kube-api-access-dzp47" (OuterVolumeSpecName: "kube-api-access-dzp47") pod "3880fa85-26b0-4ed9-9b69-fe57b8c01092" (UID: "3880fa85-26b0-4ed9-9b69-fe57b8c01092"). InnerVolumeSpecName "kube-api-access-dzp47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.375298 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzp47\" (UniqueName: \"kubernetes.io/projected/3880fa85-26b0-4ed9-9b69-fe57b8c01092-kube-api-access-dzp47\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.375434 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.375448 5034 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3880fa85-26b0-4ed9-9b69-fe57b8c01092-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.375460 5034 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.375468 5034 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3880fa85-26b0-4ed9-9b69-fe57b8c01092-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.641053 5034 generic.go:334] "Generic (PLEG): container finished" podID="143b2828-1125-4598-8d3a-44fdc8023b73" containerID="5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6" exitCode=0 Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.641144 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.641166 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" event={"ID":"143b2828-1125-4598-8d3a-44fdc8023b73","Type":"ContainerDied","Data":"5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6"} Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.641734 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq" event={"ID":"143b2828-1125-4598-8d3a-44fdc8023b73","Type":"ContainerDied","Data":"b29a3cf1a6638e4cc664970fd3520fe07a3bbf92ceab62d659d984ebe54a1658"} Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.641755 5034 scope.go:117] "RemoveContainer" containerID="5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.643689 5034 generic.go:334] "Generic (PLEG): container finished" podID="3880fa85-26b0-4ed9-9b69-fe57b8c01092" containerID="e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27" exitCode=0 Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.643722 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" event={"ID":"3880fa85-26b0-4ed9-9b69-fe57b8c01092","Type":"ContainerDied","Data":"e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27"} Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.643740 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" 
event={"ID":"3880fa85-26b0-4ed9-9b69-fe57b8c01092","Type":"ContainerDied","Data":"cf4d81ca4e848b5e6dc0efb44b36f72624996412b351f209f7eba6fa120f549b"} Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.643810 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xdbd6" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.662529 5034 scope.go:117] "RemoveContainer" containerID="5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6" Jan 05 21:57:16 crc kubenswrapper[5034]: E0105 21:57:16.663009 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6\": container with ID starting with 5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6 not found: ID does not exist" containerID="5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.663053 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6"} err="failed to get container status \"5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6\": rpc error: code = NotFound desc = could not find container \"5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6\": container with ID starting with 5e09fb2b6be0306a26063452bd3d8fe48b24e3d23f37a11374e41275e781fae6 not found: ID does not exist" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.663092 5034 scope.go:117] "RemoveContainer" containerID="e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.676956 5034 scope.go:117] "RemoveContainer" containerID="e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.677845 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq"] Jan 05 21:57:16 crc kubenswrapper[5034]: E0105 21:57:16.681016 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27\": container with ID starting with e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27 not found: ID does not exist" containerID="e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.681068 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27"} err="failed to get container status \"e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27\": rpc error: code = NotFound desc = could not find container \"e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27\": container with ID starting with e134acb40541a8f4fc0cbc644aaaee075aceaeb4c77cba0106b299a20492db27 not found: ID does not exist" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.685800 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c9bmq"] Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.701675 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-xdbd6"] Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.708361 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xdbd6"] Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.850457 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2"] Jan 05 21:57:16 crc kubenswrapper[5034]: E0105 21:57:16.850748 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3880fa85-26b0-4ed9-9b69-fe57b8c01092" containerName="controller-manager" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.850771 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3880fa85-26b0-4ed9-9b69-fe57b8c01092" containerName="controller-manager" Jan 05 21:57:16 crc kubenswrapper[5034]: E0105 21:57:16.850801 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143b2828-1125-4598-8d3a-44fdc8023b73" containerName="route-controller-manager" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.850812 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="143b2828-1125-4598-8d3a-44fdc8023b73" containerName="route-controller-manager" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.850924 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3880fa85-26b0-4ed9-9b69-fe57b8c01092" containerName="controller-manager" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.850945 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="143b2828-1125-4598-8d3a-44fdc8023b73" containerName="route-controller-manager" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.851435 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.853362 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.853596 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.853899 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.854229 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.854906 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-558b8949d5-l6lzg"] Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.855529 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.857671 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.858116 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.858714 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.858937 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.859160 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.859248 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.859283 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.859468 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.864793 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2"] Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.882696 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.884211 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-config\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.884323 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-proxy-ca-bundles\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.884375 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4kb\" (UniqueName: \"kubernetes.io/projected/34466172-c637-4271-81d3-5d2c6295ad15-kube-api-access-fw4kb\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.884657 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/be520c34-f871-499d-9650-0c80708dee95-client-ca\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.884723 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be520c34-f871-499d-9650-0c80708dee95-config\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.884839 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-client-ca\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.884868 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34466172-c637-4271-81d3-5d2c6295ad15-serving-cert\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.884950 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-986nr\" (UniqueName: \"kubernetes.io/projected/be520c34-f871-499d-9650-0c80708dee95-kube-api-access-986nr\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.885011 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be520c34-f871-499d-9650-0c80708dee95-serving-cert\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.892833 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-558b8949d5-l6lzg"] Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.986496 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be520c34-f871-499d-9650-0c80708dee95-config\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.986560 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-client-ca\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc 
kubenswrapper[5034]: I0105 21:57:16.986578 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34466172-c637-4271-81d3-5d2c6295ad15-serving-cert\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.986609 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-986nr\" (UniqueName: \"kubernetes.io/projected/be520c34-f871-499d-9650-0c80708dee95-kube-api-access-986nr\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.986631 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be520c34-f871-499d-9650-0c80708dee95-serving-cert\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.986662 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-config\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.986686 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-proxy-ca-bundles\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.986706 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw4kb\" (UniqueName: \"kubernetes.io/projected/34466172-c637-4271-81d3-5d2c6295ad15-kube-api-access-fw4kb\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.986728 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be520c34-f871-499d-9650-0c80708dee95-client-ca\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.987713 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be520c34-f871-499d-9650-0c80708dee95-client-ca\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.987870 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-client-ca\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.987994 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be520c34-f871-499d-9650-0c80708dee95-config\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.988242 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-proxy-ca-bundles\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.988532 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34466172-c637-4271-81d3-5d2c6295ad15-config\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.992932 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34466172-c637-4271-81d3-5d2c6295ad15-serving-cert\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:16 crc kubenswrapper[5034]: I0105 21:57:16.992964 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be520c34-f871-499d-9650-0c80708dee95-serving-cert\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.002752 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-986nr\" (UniqueName: \"kubernetes.io/projected/be520c34-f871-499d-9650-0c80708dee95-kube-api-access-986nr\") pod \"route-controller-manager-85f5679bcf-z54v2\" (UID: \"be520c34-f871-499d-9650-0c80708dee95\") " pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.003654 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw4kb\" (UniqueName: \"kubernetes.io/projected/34466172-c637-4271-81d3-5d2c6295ad15-kube-api-access-fw4kb\") pod \"controller-manager-558b8949d5-l6lzg\" (UID: \"34466172-c637-4271-81d3-5d2c6295ad15\") " pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.168932 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.174953 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.506952 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-558b8949d5-l6lzg"] Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.605012 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2"] Jan 05 21:57:17 crc kubenswrapper[5034]: W0105 21:57:17.612437 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe520c34_f871_499d_9650_0c80708dee95.slice/crio-396f51a1bb52a54c89143bbfd8a59dd776feb76a05cd0cb3016c3231928dd655 WatchSource:0}: Error finding container 396f51a1bb52a54c89143bbfd8a59dd776feb76a05cd0cb3016c3231928dd655: Status 404 returned error can't find the container with id 396f51a1bb52a54c89143bbfd8a59dd776feb76a05cd0cb3016c3231928dd655 Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.657241 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" event={"ID":"34466172-c637-4271-81d3-5d2c6295ad15","Type":"ContainerStarted","Data":"2e3debd583da8b36d0c25853dc112d3b242c0acd9fd90325ea5c2a42a0e61f9b"} Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.657651 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.657667 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" event={"ID":"34466172-c637-4271-81d3-5d2c6295ad15","Type":"ContainerStarted","Data":"0c325353434fccd53f1167b8fd1d9bda7bd7c683b398ec23bc2ded790563477e"} Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.661412 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" event={"ID":"be520c34-f871-499d-9650-0c80708dee95","Type":"ContainerStarted","Data":"396f51a1bb52a54c89143bbfd8a59dd776feb76a05cd0cb3016c3231928dd655"} Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.661446 5034 patch_prober.go:28] interesting pod/controller-manager-558b8949d5-l6lzg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.661520 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" podUID="34466172-c637-4271-81d3-5d2c6295ad15" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.681145 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" podStartSLOduration=2.681123387 podStartE2EDuration="2.681123387s" podCreationTimestamp="2026-01-05 21:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:57:17.679753318 +0000 UTC m=+330.051752757" 
watchObservedRunningTime="2026-01-05 21:57:17.681123387 +0000 UTC m=+330.053122826" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.845620 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143b2828-1125-4598-8d3a-44fdc8023b73" path="/var/lib/kubelet/pods/143b2828-1125-4598-8d3a-44fdc8023b73/volumes" Jan 05 21:57:17 crc kubenswrapper[5034]: I0105 21:57:17.846287 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3880fa85-26b0-4ed9-9b69-fe57b8c01092" path="/var/lib/kubelet/pods/3880fa85-26b0-4ed9-9b69-fe57b8c01092/volumes" Jan 05 21:57:18 crc kubenswrapper[5034]: I0105 21:57:18.675010 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" event={"ID":"be520c34-f871-499d-9650-0c80708dee95","Type":"ContainerStarted","Data":"9eaee6aa8f2a486345c9e01b416d124058756ddbffb7972ed0301567159ee79d"} Jan 05 21:57:18 crc kubenswrapper[5034]: I0105 21:57:18.675067 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:18 crc kubenswrapper[5034]: I0105 21:57:18.679740 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" Jan 05 21:57:18 crc kubenswrapper[5034]: I0105 21:57:18.679913 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-558b8949d5-l6lzg" Jan 05 21:57:18 crc kubenswrapper[5034]: I0105 21:57:18.698265 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85f5679bcf-z54v2" podStartSLOduration=3.6982469890000003 podStartE2EDuration="3.698246989s" podCreationTimestamp="2026-01-05 21:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:57:18.696004125 +0000 UTC m=+331.068003564" watchObservedRunningTime="2026-01-05 21:57:18.698246989 +0000 UTC m=+331.070246428" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.368452 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlt9r"] Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.369738 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vlt9r" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerName="registry-server" containerID="cri-o://a9dceb956f364b938713ba50e524c6b2f74b5a715a9b4acadb60e76df5249167" gracePeriod=30 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.378396 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rwss9"] Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.378892 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rwss9" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" containerName="registry-server" containerID="cri-o://04004fa19986610ab776dc5ebc4c5a9bacad7fc058652945fab91308898485bb" gracePeriod=30 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.440624 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dd7xb"] Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.440865 5034 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" podUID="af1871ae-05fe-4597-8bb9-e2525f739922" containerName="marketplace-operator" containerID="cri-o://dac27b9d7bd7b7be02943a041c732a823987efea64dd55370f4933332d5f586b" gracePeriod=30 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.455443 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv6pz"] Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.455753 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kv6pz" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerName="registry-server" containerID="cri-o://cd0fb59f175b15ec63cb57a410f11e32615a76adc3637f60813478acd48cf6a3" gracePeriod=30 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.459209 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvbkt"] Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.459988 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.469586 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8mqz"] Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.470819 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l8mqz" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="registry-server" containerID="cri-o://a79f73a48e3ddfda18f2cfd00c0aa1e555cf6989381fec11b6e7d6e4d1cd560a" gracePeriod=30 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.484661 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvbkt"] Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.596231 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29925def-614b-4b01-ad4f-056d5f252000-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.596298 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29925def-614b-4b01-ad4f-056d5f252000-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.596381 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2zq\" (UniqueName: \"kubernetes.io/projected/29925def-614b-4b01-ad4f-056d5f252000-kube-api-access-ql2zq\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.697335 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql2zq\" (UniqueName: 
\"kubernetes.io/projected/29925def-614b-4b01-ad4f-056d5f252000-kube-api-access-ql2zq\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.697403 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29925def-614b-4b01-ad4f-056d5f252000-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.697445 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29925def-614b-4b01-ad4f-056d5f252000-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.699127 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29925def-614b-4b01-ad4f-056d5f252000-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.706032 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29925def-614b-4b01-ad4f-056d5f252000-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.715456 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql2zq\" (UniqueName: \"kubernetes.io/projected/29925def-614b-4b01-ad4f-056d5f252000-kube-api-access-ql2zq\") pod \"marketplace-operator-79b997595-wvbkt\" (UID: \"29925def-614b-4b01-ad4f-056d5f252000\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.716550 5034 generic.go:334] "Generic (PLEG): container finished" podID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerID="a79f73a48e3ddfda18f2cfd00c0aa1e555cf6989381fec11b6e7d6e4d1cd560a" exitCode=0 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.716634 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8mqz" event={"ID":"0aa39adf-fc5d-44bf-a491-0ff564bd864c","Type":"ContainerDied","Data":"a79f73a48e3ddfda18f2cfd00c0aa1e555cf6989381fec11b6e7d6e4d1cd560a"} Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.717994 5034 generic.go:334] "Generic (PLEG): container finished" podID="4b263441-0124-45fe-8cc0-14aa272246c3" containerID="04004fa19986610ab776dc5ebc4c5a9bacad7fc058652945fab91308898485bb" exitCode=0 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.718027 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwss9" 
event={"ID":"4b263441-0124-45fe-8cc0-14aa272246c3","Type":"ContainerDied","Data":"04004fa19986610ab776dc5ebc4c5a9bacad7fc058652945fab91308898485bb"} Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.718613 5034 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dd7xb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.718675 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" podUID="af1871ae-05fe-4597-8bb9-e2525f739922" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.722961 5034 generic.go:334] "Generic (PLEG): container finished" podID="af1871ae-05fe-4597-8bb9-e2525f739922" containerID="dac27b9d7bd7b7be02943a041c732a823987efea64dd55370f4933332d5f586b" exitCode=0 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.723008 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" event={"ID":"af1871ae-05fe-4597-8bb9-e2525f739922","Type":"ContainerDied","Data":"dac27b9d7bd7b7be02943a041c732a823987efea64dd55370f4933332d5f586b"} Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.725806 5034 generic.go:334] "Generic (PLEG): container finished" podID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerID="cd0fb59f175b15ec63cb57a410f11e32615a76adc3637f60813478acd48cf6a3" exitCode=0 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.725875 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv6pz" event={"ID":"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7","Type":"ContainerDied","Data":"cd0fb59f175b15ec63cb57a410f11e32615a76adc3637f60813478acd48cf6a3"} Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.733225 5034 generic.go:334] "Generic (PLEG): container finished" podID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerID="a9dceb956f364b938713ba50e524c6b2f74b5a715a9b4acadb60e76df5249167" exitCode=0 Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.733273 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlt9r" event={"ID":"58104f59-4ae4-4e18-aa6a-6762a589e921","Type":"ContainerDied","Data":"a9dceb956f364b938713ba50e524c6b2f74b5a715a9b4acadb60e76df5249167"} Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.779626 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:26 crc kubenswrapper[5034]: I0105 21:57:26.982737 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.102406 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-catalog-content\") pod \"4b263441-0124-45fe-8cc0-14aa272246c3\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.102470 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9dbc\" (UniqueName: \"kubernetes.io/projected/4b263441-0124-45fe-8cc0-14aa272246c3-kube-api-access-z9dbc\") pod \"4b263441-0124-45fe-8cc0-14aa272246c3\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.102568 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-utilities\") pod \"4b263441-0124-45fe-8cc0-14aa272246c3\" (UID: \"4b263441-0124-45fe-8cc0-14aa272246c3\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.103567 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-utilities" (OuterVolumeSpecName: "utilities") pod "4b263441-0124-45fe-8cc0-14aa272246c3" (UID: "4b263441-0124-45fe-8cc0-14aa272246c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.108466 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b263441-0124-45fe-8cc0-14aa272246c3-kube-api-access-z9dbc" (OuterVolumeSpecName: "kube-api-access-z9dbc") pod "4b263441-0124-45fe-8cc0-14aa272246c3" (UID: "4b263441-0124-45fe-8cc0-14aa272246c3"). InnerVolumeSpecName "kube-api-access-z9dbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.160114 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b263441-0124-45fe-8cc0-14aa272246c3" (UID: "4b263441-0124-45fe-8cc0-14aa272246c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.204678 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.204718 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b263441-0124-45fe-8cc0-14aa272246c3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.204751 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9dbc\" (UniqueName: \"kubernetes.io/projected/4b263441-0124-45fe-8cc0-14aa272246c3-kube-api-access-z9dbc\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.224973 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.246148 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.247975 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.256455 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411553 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br6nm\" (UniqueName: \"kubernetes.io/projected/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-kube-api-access-br6nm\") pod \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411616 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-operator-metrics\") pod \"af1871ae-05fe-4597-8bb9-e2525f739922\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411652 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-utilities\") pod \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411747 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-catalog-content\") pod \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411782 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqz9l\" (UniqueName: \"kubernetes.io/projected/0aa39adf-fc5d-44bf-a491-0ff564bd864c-kube-api-access-lqz9l\") pod \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411803 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-utilities\") pod \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\" (UID: \"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411822 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v24kz\" (UniqueName: \"kubernetes.io/projected/af1871ae-05fe-4597-8bb9-e2525f739922-kube-api-access-v24kz\") pod \"af1871ae-05fe-4597-8bb9-e2525f739922\" (UID: \"af1871ae-05fe-4597-8bb9-e2525f739922\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411871 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-trusted-ca\") pod \"af1871ae-05fe-4597-8bb9-e2525f739922\" (UID: 
\"af1871ae-05fe-4597-8bb9-e2525f739922\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411905 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-catalog-content\") pod \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\" (UID: \"0aa39adf-fc5d-44bf-a491-0ff564bd864c\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411923 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn4hq\" (UniqueName: \"kubernetes.io/projected/58104f59-4ae4-4e18-aa6a-6762a589e921-kube-api-access-mn4hq\") pod \"58104f59-4ae4-4e18-aa6a-6762a589e921\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411946 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-utilities\") pod \"58104f59-4ae4-4e18-aa6a-6762a589e921\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.411981 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-catalog-content\") pod \"58104f59-4ae4-4e18-aa6a-6762a589e921\" (UID: \"58104f59-4ae4-4e18-aa6a-6762a589e921\") " Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.413549 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-utilities" (OuterVolumeSpecName: "utilities") pod "0aa39adf-fc5d-44bf-a491-0ff564bd864c" (UID: "0aa39adf-fc5d-44bf-a491-0ff564bd864c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.414201 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-utilities" (OuterVolumeSpecName: "utilities") pod "58104f59-4ae4-4e18-aa6a-6762a589e921" (UID: "58104f59-4ae4-4e18-aa6a-6762a589e921"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.414719 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "af1871ae-05fe-4597-8bb9-e2525f739922" (UID: "af1871ae-05fe-4597-8bb9-e2525f739922"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.415269 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-utilities" (OuterVolumeSpecName: "utilities") pod "6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" (UID: "6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.416335 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa39adf-fc5d-44bf-a491-0ff564bd864c-kube-api-access-lqz9l" (OuterVolumeSpecName: "kube-api-access-lqz9l") pod "0aa39adf-fc5d-44bf-a491-0ff564bd864c" (UID: "0aa39adf-fc5d-44bf-a491-0ff564bd864c"). InnerVolumeSpecName "kube-api-access-lqz9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.416780 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af1871ae-05fe-4597-8bb9-e2525f739922-kube-api-access-v24kz" (OuterVolumeSpecName: "kube-api-access-v24kz") pod "af1871ae-05fe-4597-8bb9-e2525f739922" (UID: "af1871ae-05fe-4597-8bb9-e2525f739922"). InnerVolumeSpecName "kube-api-access-v24kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.416880 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58104f59-4ae4-4e18-aa6a-6762a589e921-kube-api-access-mn4hq" (OuterVolumeSpecName: "kube-api-access-mn4hq") pod "58104f59-4ae4-4e18-aa6a-6762a589e921" (UID: "58104f59-4ae4-4e18-aa6a-6762a589e921"). InnerVolumeSpecName "kube-api-access-mn4hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.417134 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-kube-api-access-br6nm" (OuterVolumeSpecName: "kube-api-access-br6nm") pod "6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" (UID: "6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7"). InnerVolumeSpecName "kube-api-access-br6nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.419815 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "af1871ae-05fe-4597-8bb9-e2525f739922" (UID: "af1871ae-05fe-4597-8bb9-e2525f739922"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.442238 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" (UID: "6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.469737 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvbkt"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.472201 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58104f59-4ae4-4e18-aa6a-6762a589e921" (UID: "58104f59-4ae4-4e18-aa6a-6762a589e921"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513578 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513632 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br6nm\" (UniqueName: \"kubernetes.io/projected/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-kube-api-access-br6nm\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513658 5034 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513683 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513704 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513729 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqz9l\" (UniqueName: \"kubernetes.io/projected/0aa39adf-fc5d-44bf-a491-0ff564bd864c-kube-api-access-lqz9l\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513755 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513778 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v24kz\" (UniqueName: \"kubernetes.io/projected/af1871ae-05fe-4597-8bb9-e2525f739922-kube-api-access-v24kz\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513803 5034 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af1871ae-05fe-4597-8bb9-e2525f739922-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513829 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn4hq\" (UniqueName: \"kubernetes.io/projected/58104f59-4ae4-4e18-aa6a-6762a589e921-kube-api-access-mn4hq\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.513853 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58104f59-4ae4-4e18-aa6a-6762a589e921-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.552443 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0aa39adf-fc5d-44bf-a491-0ff564bd864c" (UID: "0aa39adf-fc5d-44bf-a491-0ff564bd864c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.614698 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa39adf-fc5d-44bf-a491-0ff564bd864c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.741747 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8mqz" event={"ID":"0aa39adf-fc5d-44bf-a491-0ff564bd864c","Type":"ContainerDied","Data":"7f2a731841f27977d0b8e718fcebf8cb8b8f1e174faa70246157ffe4a074c2c1"} Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.741805 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8mqz" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.741831 5034 scope.go:117] "RemoveContainer" containerID="a79f73a48e3ddfda18f2cfd00c0aa1e555cf6989381fec11b6e7d6e4d1cd560a" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.744645 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" event={"ID":"29925def-614b-4b01-ad4f-056d5f252000","Type":"ContainerStarted","Data":"ea4e62822047b06e8c0d674461dab9ebe104c0bc922a19f8cc7c19e1a64f479e"} Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.744683 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" event={"ID":"29925def-614b-4b01-ad4f-056d5f252000","Type":"ContainerStarted","Data":"6cb9ef2819405c43e01f5c088fbf9359c6ee130971af06a12546105441d6d0b6"} Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.745346 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.747313 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwss9" event={"ID":"4b263441-0124-45fe-8cc0-14aa272246c3","Type":"ContainerDied","Data":"04958c24a98236b10b9e62fba4066fae73484ac8aae8ef6235f9fe734c3ef0bd"} Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.747337 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwss9" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.748044 5034 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvbkt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" start-of-body= Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.748098 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" podUID="29925def-614b-4b01-ad4f-056d5f252000" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.751483 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.752162 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dd7xb" event={"ID":"af1871ae-05fe-4597-8bb9-e2525f739922","Type":"ContainerDied","Data":"eb690f4a33341301ed0fcae36c7dde38b3547499ac71a34421d8e79dd5d97ff2"} Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.763458 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv6pz" event={"ID":"6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7","Type":"ContainerDied","Data":"9281c9eec3198cfad1bdf8dc8d991920db916df66dc759916c612aac457d5574"} Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.763527 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv6pz" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.766323 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlt9r" event={"ID":"58104f59-4ae4-4e18-aa6a-6762a589e921","Type":"ContainerDied","Data":"5827c40c95ab0e5ea9a75f42f757f46c01eb31b8ddb439c793cc05f761c7a7f7"} Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.766461 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlt9r" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.768221 5034 scope.go:117] "RemoveContainer" containerID="f855c3d2a92d27ec9978a0c17631728cd9f9ddc9b1ecb369577fbe20ad78f7fa" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.785128 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" podStartSLOduration=1.7851089390000001 podStartE2EDuration="1.785108939s" podCreationTimestamp="2026-01-05 21:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:57:27.781769724 +0000 UTC m=+340.153769163" watchObservedRunningTime="2026-01-05 21:57:27.785108939 +0000 UTC m=+340.157108378" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.803800 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8mqz"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.806701 5034 scope.go:117] "RemoveContainer" containerID="3ea8d1643fa69aeeaad85378bab2d062e5df98b09b5a62c7e4daa26137b9f984" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.809833 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l8mqz"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.823318 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rwss9"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.828139 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rwss9"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.837898 5034 scope.go:117] "RemoveContainer" containerID="04004fa19986610ab776dc5ebc4c5a9bacad7fc058652945fab91308898485bb" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.852798 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" path="/var/lib/kubelet/pods/0aa39adf-fc5d-44bf-a491-0ff564bd864c/volumes" Jan 05 21:57:27 crc 
kubenswrapper[5034]: I0105 21:57:27.853569 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" path="/var/lib/kubelet/pods/4b263441-0124-45fe-8cc0-14aa272246c3/volumes" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.854272 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dd7xb"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.854312 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dd7xb"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.854338 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlt9r"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.854350 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vlt9r"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.862420 5034 scope.go:117] "RemoveContainer" containerID="abd82277f8180cd4676f5f58d583adba5c584f9436331406086634138163f579" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.863374 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv6pz"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.867360 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv6pz"] Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.878451 5034 scope.go:117] "RemoveContainer" containerID="9ae5a40e9bd330f99a6d6c54f3fe36ff41e290d8ad029945c17c8126d960a638" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.893444 5034 scope.go:117] "RemoveContainer" containerID="dac27b9d7bd7b7be02943a041c732a823987efea64dd55370f4933332d5f586b" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.910284 5034 scope.go:117] "RemoveContainer" containerID="cd0fb59f175b15ec63cb57a410f11e32615a76adc3637f60813478acd48cf6a3" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.927802 5034 scope.go:117] "RemoveContainer" containerID="2c8f5728b58fc06f77e3683f25da8bac68dbe494d2293be4458cf866bac41f04" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.945779 5034 scope.go:117] "RemoveContainer" containerID="629a86b63be6144d32d28e0f2452ad6823b99e81b6794bda0b1215d3a7c2d91b" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.964552 5034 scope.go:117] "RemoveContainer" containerID="a9dceb956f364b938713ba50e524c6b2f74b5a715a9b4acadb60e76df5249167" Jan 05 21:57:27 crc kubenswrapper[5034]: I0105 21:57:27.989966 5034 scope.go:117] "RemoveContainer" containerID="6b6275b08a9c04f305b3b36071c2fd298c7c1bd96af02daba03b4b0bea7b576d" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.005161 5034 scope.go:117] "RemoveContainer" containerID="edf41e654213713ba3870d3500d2f14ff88ff3513f1f92e55836eb017ac48f00" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.585588 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-28ptj"] Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586026 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerName="extract-content" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586037 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerName="extract-content" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586047 5034 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586053 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586064 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" containerName="extract-content" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586070 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" containerName="extract-content" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586091 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586097 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586109 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerName="extract-utilities" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586114 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerName="extract-utilities" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586123 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586128 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586135 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="extract-content" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586140 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="extract-content" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586146 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1871ae-05fe-4597-8bb9-e2525f739922" containerName="marketplace-operator" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586152 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1871ae-05fe-4597-8bb9-e2525f739922" containerName="marketplace-operator" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586160 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="extract-utilities" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586165 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="extract-utilities" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586173 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586178 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 
21:57:28.586187 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerName="extract-content" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586192 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerName="extract-content" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586199 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerName="extract-utilities" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586205 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerName="extract-utilities" Jan 05 21:57:28 crc kubenswrapper[5034]: E0105 21:57:28.586213 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" containerName="extract-utilities" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586219 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" containerName="extract-utilities" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586310 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa39adf-fc5d-44bf-a491-0ff564bd864c" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586328 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586338 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="af1871ae-05fe-4597-8bb9-e2525f739922" containerName="marketplace-operator" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586348 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b263441-0124-45fe-8cc0-14aa272246c3" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.586356 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" containerName="registry-server" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.587065 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.589233 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.601481 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28ptj"] Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.730491 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d105868-804c-47f7-a59c-d289cf852378-utilities\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.730553 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgbd\" (UniqueName: \"kubernetes.io/projected/1d105868-804c-47f7-a59c-d289cf852378-kube-api-access-6vgbd\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.730614 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d105868-804c-47f7-a59c-d289cf852378-catalog-content\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.779885 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wvbkt" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.796021 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9zvx5"] Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.799516 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.818402 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.833175 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d105868-804c-47f7-a59c-d289cf852378-catalog-content\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.833243 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d105868-804c-47f7-a59c-d289cf852378-utilities\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.833305 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vgbd\" (UniqueName: \"kubernetes.io/projected/1d105868-804c-47f7-a59c-d289cf852378-kube-api-access-6vgbd\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.833966 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d105868-804c-47f7-a59c-d289cf852378-catalog-content\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.834950 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d105868-804c-47f7-a59c-d289cf852378-utilities\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.835976 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zvx5"] Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.859159 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vgbd\" (UniqueName: \"kubernetes.io/projected/1d105868-804c-47f7-a59c-d289cf852378-kube-api-access-6vgbd\") pod \"redhat-marketplace-28ptj\" (UID: \"1d105868-804c-47f7-a59c-d289cf852378\") " pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.916449 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.934154 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7196d3d2-86be-4905-ba31-121f2e3e9c8a-utilities\") pod \"redhat-operators-9zvx5\" (UID: \"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.934247 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbkck\" (UniqueName: \"kubernetes.io/projected/7196d3d2-86be-4905-ba31-121f2e3e9c8a-kube-api-access-bbkck\") pod \"redhat-operators-9zvx5\" (UID: \"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:28 crc kubenswrapper[5034]: I0105 21:57:28.934281 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7196d3d2-86be-4905-ba31-121f2e3e9c8a-catalog-content\") pod \"redhat-operators-9zvx5\" (UID: \"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.034936 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7196d3d2-86be-4905-ba31-121f2e3e9c8a-utilities\") pod \"redhat-operators-9zvx5\" (UID: \"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.035026 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbkck\" (UniqueName: \"kubernetes.io/projected/7196d3d2-86be-4905-ba31-121f2e3e9c8a-kube-api-access-bbkck\") pod \"redhat-operators-9zvx5\" (UID: \"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.035060 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7196d3d2-86be-4905-ba31-121f2e3e9c8a-catalog-content\") pod \"redhat-operators-9zvx5\" (UID: \"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.035749 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7196d3d2-86be-4905-ba31-121f2e3e9c8a-catalog-content\") pod \"redhat-operators-9zvx5\" (UID: \"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.035860 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7196d3d2-86be-4905-ba31-121f2e3e9c8a-utilities\") pod \"redhat-operators-9zvx5\" (UID: \"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.069141 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbkck\" (UniqueName: \"kubernetes.io/projected/7196d3d2-86be-4905-ba31-121f2e3e9c8a-kube-api-access-bbkck\") pod \"redhat-operators-9zvx5\" (UID: 
\"7196d3d2-86be-4905-ba31-121f2e3e9c8a\") " pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.143265 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.310842 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28ptj"] Jan 05 21:57:29 crc kubenswrapper[5034]: W0105 21:57:29.318292 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d105868_804c_47f7_a59c_d289cf852378.slice/crio-23a6d6c0b913ec4d2baed6f88050e4190f96f4b95fbd2881994cc0c121042cbb WatchSource:0}: Error finding container 23a6d6c0b913ec4d2baed6f88050e4190f96f4b95fbd2881994cc0c121042cbb: Status 404 returned error can't find the container with id 23a6d6c0b913ec4d2baed6f88050e4190f96f4b95fbd2881994cc0c121042cbb Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.510885 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zvx5"] Jan 05 21:57:29 crc kubenswrapper[5034]: W0105 21:57:29.515581 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7196d3d2_86be_4905_ba31_121f2e3e9c8a.slice/crio-c9e47f73db79007273826a86d48a8b48d8e65b5402ed7bb7a731ba0410a08234 WatchSource:0}: Error finding container c9e47f73db79007273826a86d48a8b48d8e65b5402ed7bb7a731ba0410a08234: Status 404 returned error can't find the container with id c9e47f73db79007273826a86d48a8b48d8e65b5402ed7bb7a731ba0410a08234 Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.783819 5034 generic.go:334] "Generic (PLEG): container finished" podID="7196d3d2-86be-4905-ba31-121f2e3e9c8a" containerID="949c4df3b64246b198a7b1a936ca1639804226eb7be645dfa7e53a4679a61a23" exitCode=0 Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.783929 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zvx5" event={"ID":"7196d3d2-86be-4905-ba31-121f2e3e9c8a","Type":"ContainerDied","Data":"949c4df3b64246b198a7b1a936ca1639804226eb7be645dfa7e53a4679a61a23"} Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.784381 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zvx5" event={"ID":"7196d3d2-86be-4905-ba31-121f2e3e9c8a","Type":"ContainerStarted","Data":"c9e47f73db79007273826a86d48a8b48d8e65b5402ed7bb7a731ba0410a08234"} Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.789783 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d105868-804c-47f7-a59c-d289cf852378" containerID="adf40a804c6d2d8b63f4d8be61b35a0eff71ead70808982d8158bce1d45c8362" exitCode=0 Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.790127 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28ptj" event={"ID":"1d105868-804c-47f7-a59c-d289cf852378","Type":"ContainerDied","Data":"adf40a804c6d2d8b63f4d8be61b35a0eff71ead70808982d8158bce1d45c8362"} Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.790165 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28ptj" event={"ID":"1d105868-804c-47f7-a59c-d289cf852378","Type":"ContainerStarted","Data":"23a6d6c0b913ec4d2baed6f88050e4190f96f4b95fbd2881994cc0c121042cbb"} Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 
21:57:29.845270 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58104f59-4ae4-4e18-aa6a-6762a589e921" path="/var/lib/kubelet/pods/58104f59-4ae4-4e18-aa6a-6762a589e921/volumes"
Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.845885 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7" path="/var/lib/kubelet/pods/6d8c12ec-ccc2-4e78-8a00-0bc3b167dcd7/volumes"
Jan 05 21:57:29 crc kubenswrapper[5034]: I0105 21:57:29.846645 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af1871ae-05fe-4597-8bb9-e2525f739922" path="/var/lib/kubelet/pods/af1871ae-05fe-4597-8bb9-e2525f739922/volumes"
Jan 05 21:57:30 crc kubenswrapper[5034]: I0105 21:57:30.796915 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zvx5" event={"ID":"7196d3d2-86be-4905-ba31-121f2e3e9c8a","Type":"ContainerStarted","Data":"c5fbc6e7554f12852880eaea6066536ce79e906cecee9fd70512f4c9a0e46e3b"}
Jan 05 21:57:30 crc kubenswrapper[5034]: I0105 21:57:30.799108 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d105868-804c-47f7-a59c-d289cf852378" containerID="6f16fd878af479e751c5dd8262d19d3991f3630e38018065b003c4c526023620" exitCode=0
Jan 05 21:57:30 crc kubenswrapper[5034]: I0105 21:57:30.799153 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28ptj" event={"ID":"1d105868-804c-47f7-a59c-d289cf852378","Type":"ContainerDied","Data":"6f16fd878af479e751c5dd8262d19d3991f3630e38018065b003c4c526023620"}
Jan 05 21:57:30 crc kubenswrapper[5034]: I0105 21:57:30.984172 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-775ks"]
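The kubelet_volumes.go entries above are housekeeping for pods that no longer exist: once a pod UID is no longer tracked, its /var/lib/kubelet/pods/<uid>/volumes directory gets removed. A sketch of the detection half, with the podsRoot path layout and the activePods set as assumptions for the example:

```go
// Illustrative sketch: find pod directories whose UID the kubelet no longer
// tracks, the situation behind "Cleaned up orphaned pod volumes dir".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func orphanedVolumeDirs(podsRoot string, active map[string]bool) ([]string, error) {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return nil, err
	}
	var orphans []string
	for _, e := range entries {
		if e.IsDir() && !active[e.Name()] {
			orphans = append(orphans, filepath.Join(podsRoot, e.Name(), "volumes"))
		}
	}
	return orphans, nil
}

func main() {
	active := map[string]bool{"7196d3d2-86be-4905-ba31-121f2e3e9c8a": true}
	orphans, err := orphanedVolumeDirs("/var/lib/kubelet/pods", active)
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	for _, o := range orphans {
		fmt.Println("candidate for cleanup:", o) // only removed once fully unmounted
	}
}
```

Jan 05 21:57:30 crc kubenswrapper[5034]: I0105 21:57:30.985255 5034 util.go:30] "No sandbox for pod can be found. 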
Need to start a new one" pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:30 crc kubenswrapper[5034]: I0105 21:57:30.988485 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 21:57:30 crc kubenswrapper[5034]: I0105 21:57:30.997844 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-775ks"] Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.157653 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c22fec-05cd-4506-a6e4-0508d9a3251a-catalog-content\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.157742 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c22fec-05cd-4506-a6e4-0508d9a3251a-utilities\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.157772 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sspb\" (UniqueName: \"kubernetes.io/projected/43c22fec-05cd-4506-a6e4-0508d9a3251a-kube-api-access-6sspb\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.195668 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rslnm"] Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.197401 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.199829 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.201458 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rslnm"] Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.259089 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sspb\" (UniqueName: \"kubernetes.io/projected/43c22fec-05cd-4506-a6e4-0508d9a3251a-kube-api-access-6sspb\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.259155 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c22fec-05cd-4506-a6e4-0508d9a3251a-catalog-content\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.259212 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c22fec-05cd-4506-a6e4-0508d9a3251a-utilities\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.259709 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c22fec-05cd-4506-a6e4-0508d9a3251a-utilities\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.260240 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c22fec-05cd-4506-a6e4-0508d9a3251a-catalog-content\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.279659 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sspb\" (UniqueName: \"kubernetes.io/projected/43c22fec-05cd-4506-a6e4-0508d9a3251a-kube-api-access-6sspb\") pod \"certified-operators-775ks\" (UID: \"43c22fec-05cd-4506-a6e4-0508d9a3251a\") " pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.306256 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.360778 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be6f642-7b7e-4b18-a3f6-184fca000d37-catalog-content\") pod \"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.361165 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbz79\" (UniqueName: \"kubernetes.io/projected/7be6f642-7b7e-4b18-a3f6-184fca000d37-kube-api-access-nbz79\") pod \"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.361227 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be6f642-7b7e-4b18-a3f6-184fca000d37-utilities\") pod \"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.462868 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be6f642-7b7e-4b18-a3f6-184fca000d37-utilities\") pod \"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.462963 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be6f642-7b7e-4b18-a3f6-184fca000d37-catalog-content\") pod \"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.462994 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbz79\" (UniqueName: \"kubernetes.io/projected/7be6f642-7b7e-4b18-a3f6-184fca000d37-kube-api-access-nbz79\") pod \"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.464425 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be6f642-7b7e-4b18-a3f6-184fca000d37-utilities\") pod \"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.464482 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be6f642-7b7e-4b18-a3f6-184fca000d37-catalog-content\") pod \"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.483016 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbz79\" (UniqueName: \"kubernetes.io/projected/7be6f642-7b7e-4b18-a3f6-184fca000d37-kube-api-access-nbz79\") pod 
\"community-operators-rslnm\" (UID: \"7be6f642-7b7e-4b18-a3f6-184fca000d37\") " pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.518153 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.714181 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-775ks"] Jan 05 21:57:31 crc kubenswrapper[5034]: W0105 21:57:31.728584 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c22fec_05cd_4506_a6e4_0508d9a3251a.slice/crio-40addeb7e7462374a68768a038733bb6d0978341cf5f21196177358d5bde582e WatchSource:0}: Error finding container 40addeb7e7462374a68768a038733bb6d0978341cf5f21196177358d5bde582e: Status 404 returned error can't find the container with id 40addeb7e7462374a68768a038733bb6d0978341cf5f21196177358d5bde582e Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.807018 5034 generic.go:334] "Generic (PLEG): container finished" podID="7196d3d2-86be-4905-ba31-121f2e3e9c8a" containerID="c5fbc6e7554f12852880eaea6066536ce79e906cecee9fd70512f4c9a0e46e3b" exitCode=0 Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.807072 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zvx5" event={"ID":"7196d3d2-86be-4905-ba31-121f2e3e9c8a","Type":"ContainerDied","Data":"c5fbc6e7554f12852880eaea6066536ce79e906cecee9fd70512f4c9a0e46e3b"} Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.811736 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-775ks" event={"ID":"43c22fec-05cd-4506-a6e4-0508d9a3251a","Type":"ContainerStarted","Data":"40addeb7e7462374a68768a038733bb6d0978341cf5f21196177358d5bde582e"} Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.815964 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28ptj" event={"ID":"1d105868-804c-47f7-a59c-d289cf852378","Type":"ContainerStarted","Data":"7ee064d5287426d7b92dd17b572b1c9e600935053ed9ade8fcb5e24fbc9945a3"} Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.844586 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-28ptj" podStartSLOduration=2.41227366 podStartE2EDuration="3.844570276s" podCreationTimestamp="2026-01-05 21:57:28 +0000 UTC" firstStartedPulling="2026-01-05 21:57:29.791586376 +0000 UTC m=+342.163585815" lastFinishedPulling="2026-01-05 21:57:31.223882992 +0000 UTC m=+343.595882431" observedRunningTime="2026-01-05 21:57:31.839022518 +0000 UTC m=+344.211021957" watchObservedRunningTime="2026-01-05 21:57:31.844570276 +0000 UTC m=+344.216569715" Jan 05 21:57:31 crc kubenswrapper[5034]: I0105 21:57:31.906727 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rslnm"] Jan 05 21:57:31 crc kubenswrapper[5034]: W0105 21:57:31.938611 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be6f642_7b7e_4b18_a3f6_184fca000d37.slice/crio-1ec1434847029ea812b345d2184461f7c1e87b29e49f7e8ae1137b6d3cacfd77 WatchSource:0}: Error finding container 1ec1434847029ea812b345d2184461f7c1e87b29e49f7e8ae1137b6d3cacfd77: Status 404 returned error can't find the container with id 
1ec1434847029ea812b345d2184461f7c1e87b29e49f7e8ae1137b6d3cacfd77 Jan 05 21:57:32 crc kubenswrapper[5034]: I0105 21:57:32.826321 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zvx5" event={"ID":"7196d3d2-86be-4905-ba31-121f2e3e9c8a","Type":"ContainerStarted","Data":"08e050b2d369502c5da0d90d196f41349f3444d5de5a676a5e464213181a4062"} Jan 05 21:57:32 crc kubenswrapper[5034]: I0105 21:57:32.829426 5034 generic.go:334] "Generic (PLEG): container finished" podID="43c22fec-05cd-4506-a6e4-0508d9a3251a" containerID="1fe0c6b933cd1d8a7faf0bf05c85ba1e78fa559602b38f0addd3d0dcec649549" exitCode=0 Jan 05 21:57:32 crc kubenswrapper[5034]: I0105 21:57:32.829495 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-775ks" event={"ID":"43c22fec-05cd-4506-a6e4-0508d9a3251a","Type":"ContainerDied","Data":"1fe0c6b933cd1d8a7faf0bf05c85ba1e78fa559602b38f0addd3d0dcec649549"} Jan 05 21:57:32 crc kubenswrapper[5034]: I0105 21:57:32.831127 5034 generic.go:334] "Generic (PLEG): container finished" podID="7be6f642-7b7e-4b18-a3f6-184fca000d37" containerID="89e1e2a28e4fbaf1f400b27e4751149128c679462fdabd7a1e055cb8bcdb155d" exitCode=0 Jan 05 21:57:32 crc kubenswrapper[5034]: I0105 21:57:32.831457 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rslnm" event={"ID":"7be6f642-7b7e-4b18-a3f6-184fca000d37","Type":"ContainerDied","Data":"89e1e2a28e4fbaf1f400b27e4751149128c679462fdabd7a1e055cb8bcdb155d"} Jan 05 21:57:32 crc kubenswrapper[5034]: I0105 21:57:32.831487 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rslnm" event={"ID":"7be6f642-7b7e-4b18-a3f6-184fca000d37","Type":"ContainerStarted","Data":"1ec1434847029ea812b345d2184461f7c1e87b29e49f7e8ae1137b6d3cacfd77"} Jan 05 21:57:32 crc kubenswrapper[5034]: I0105 21:57:32.849786 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9zvx5" podStartSLOduration=2.417663981 podStartE2EDuration="4.849770139s" podCreationTimestamp="2026-01-05 21:57:28 +0000 UTC" firstStartedPulling="2026-01-05 21:57:29.786684346 +0000 UTC m=+342.158683785" lastFinishedPulling="2026-01-05 21:57:32.218790504 +0000 UTC m=+344.590789943" observedRunningTime="2026-01-05 21:57:32.847565777 +0000 UTC m=+345.219565226" watchObservedRunningTime="2026-01-05 21:57:32.849770139 +0000 UTC m=+345.221769578" Jan 05 21:57:33 crc kubenswrapper[5034]: I0105 21:57:33.837039 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rslnm" event={"ID":"7be6f642-7b7e-4b18-a3f6-184fca000d37","Type":"ContainerStarted","Data":"13171c2ea70c6cb264e291efd713c8b41c4db8fc007858c3374cf9ec130f524e"} Jan 05 21:57:33 crc kubenswrapper[5034]: I0105 21:57:33.843950 5034 generic.go:334] "Generic (PLEG): container finished" podID="43c22fec-05cd-4506-a6e4-0508d9a3251a" containerID="92b3829f05156e418b3d5de749f8837003d6ceb97840f389dccb68d0963ac32d" exitCode=0 Jan 05 21:57:33 crc kubenswrapper[5034]: I0105 21:57:33.844522 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-775ks" event={"ID":"43c22fec-05cd-4506-a6e4-0508d9a3251a","Type":"ContainerDied","Data":"92b3829f05156e418b3d5de749f8837003d6ceb97840f389dccb68d0963ac32d"} Jan 05 21:57:34 crc kubenswrapper[5034]: I0105 21:57:34.852051 5034 generic.go:334] "Generic (PLEG): container finished" 
podID="7be6f642-7b7e-4b18-a3f6-184fca000d37" containerID="13171c2ea70c6cb264e291efd713c8b41c4db8fc007858c3374cf9ec130f524e" exitCode=0 Jan 05 21:57:34 crc kubenswrapper[5034]: I0105 21:57:34.852115 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rslnm" event={"ID":"7be6f642-7b7e-4b18-a3f6-184fca000d37","Type":"ContainerDied","Data":"13171c2ea70c6cb264e291efd713c8b41c4db8fc007858c3374cf9ec130f524e"} Jan 05 21:57:34 crc kubenswrapper[5034]: I0105 21:57:34.856661 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-775ks" event={"ID":"43c22fec-05cd-4506-a6e4-0508d9a3251a","Type":"ContainerStarted","Data":"d80d3f160014340b410a4cf43fbc763d36fcce9cc29e15295ee02b2f8118cbab"} Jan 05 21:57:34 crc kubenswrapper[5034]: I0105 21:57:34.900475 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-775ks" podStartSLOduration=3.453723998 podStartE2EDuration="4.900454334s" podCreationTimestamp="2026-01-05 21:57:30 +0000 UTC" firstStartedPulling="2026-01-05 21:57:32.830961195 +0000 UTC m=+345.202960634" lastFinishedPulling="2026-01-05 21:57:34.277691531 +0000 UTC m=+346.649690970" observedRunningTime="2026-01-05 21:57:34.897946113 +0000 UTC m=+347.269945552" watchObservedRunningTime="2026-01-05 21:57:34.900454334 +0000 UTC m=+347.272453773" Jan 05 21:57:35 crc kubenswrapper[5034]: I0105 21:57:35.863533 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rslnm" event={"ID":"7be6f642-7b7e-4b18-a3f6-184fca000d37","Type":"ContainerStarted","Data":"118ba8b1dfe059410361291028ea7e601523b2ead5700147cdbecb9b5443848d"} Jan 05 21:57:35 crc kubenswrapper[5034]: I0105 21:57:35.879356 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rslnm" podStartSLOduration=2.181792428 podStartE2EDuration="4.87932909s" podCreationTimestamp="2026-01-05 21:57:31 +0000 UTC" firstStartedPulling="2026-01-05 21:57:32.832340344 +0000 UTC m=+345.204339783" lastFinishedPulling="2026-01-05 21:57:35.529877006 +0000 UTC m=+347.901876445" observedRunningTime="2026-01-05 21:57:35.878659381 +0000 UTC m=+348.250658820" watchObservedRunningTime="2026-01-05 21:57:35.87932909 +0000 UTC m=+348.251328529" Jan 05 21:57:36 crc kubenswrapper[5034]: I0105 21:57:36.869038 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5qmt6"] Jan 05 21:57:36 crc kubenswrapper[5034]: I0105 21:57:36.870294 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:36 crc kubenswrapper[5034]: I0105 21:57:36.883770 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5qmt6"] Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.045584 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-registry-tls\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.045909 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/676fc2db-a515-49c9-8811-50d878dcb9d8-trusted-ca\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.045953 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/676fc2db-a515-49c9-8811-50d878dcb9d8-registry-certificates\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.045986 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.046007 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/676fc2db-a515-49c9-8811-50d878dcb9d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.046045 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-bound-sa-token\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.046123 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/676fc2db-a515-49c9-8811-50d878dcb9d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.046144 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4p8\" (UniqueName: 
\"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-kube-api-access-7s4p8\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.091584 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.147444 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/676fc2db-a515-49c9-8811-50d878dcb9d8-registry-certificates\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.147500 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/676fc2db-a515-49c9-8811-50d878dcb9d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.147520 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-bound-sa-token\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.147554 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/676fc2db-a515-49c9-8811-50d878dcb9d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.147573 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s4p8\" (UniqueName: \"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-kube-api-access-7s4p8\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.147615 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-registry-tls\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.147630 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/676fc2db-a515-49c9-8811-50d878dcb9d8-trusted-ca\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.148892 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/676fc2db-a515-49c9-8811-50d878dcb9d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.149501 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/676fc2db-a515-49c9-8811-50d878dcb9d8-trusted-ca\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.149821 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/676fc2db-a515-49c9-8811-50d878dcb9d8-registry-certificates\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.159647 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/676fc2db-a515-49c9-8811-50d878dcb9d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.159652 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-registry-tls\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.166384 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-bound-sa-token\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.170704 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s4p8\" (UniqueName: \"kubernetes.io/projected/676fc2db-a515-49c9-8811-50d878dcb9d8-kube-api-access-7s4p8\") pod \"image-registry-66df7c8f76-5qmt6\" (UID: \"676fc2db-a515-49c9-8811-50d878dcb9d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.391100 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.826001 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5qmt6"] Jan 05 21:57:37 crc kubenswrapper[5034]: I0105 21:57:37.877055 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" event={"ID":"676fc2db-a515-49c9-8811-50d878dcb9d8","Type":"ContainerStarted","Data":"fbb00d0ac67c62e8e56296ed0cf97802bb2672ced361cb9ae995516852e8c241"} Jan 05 21:57:38 crc kubenswrapper[5034]: I0105 21:57:38.883929 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" event={"ID":"676fc2db-a515-49c9-8811-50d878dcb9d8","Type":"ContainerStarted","Data":"7b7ee0f8b63262fba422aac780290bcaacef1a39b47eb8177e7c35012f8dfb6b"} Jan 05 21:57:38 crc kubenswrapper[5034]: I0105 21:57:38.884479 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:38 crc kubenswrapper[5034]: I0105 21:57:38.900945 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" podStartSLOduration=2.900923884 podStartE2EDuration="2.900923884s" podCreationTimestamp="2026-01-05 21:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:57:38.898160135 +0000 UTC m=+351.270159574" watchObservedRunningTime="2026-01-05 21:57:38.900923884 +0000 UTC m=+351.272923323" Jan 05 21:57:38 crc kubenswrapper[5034]: I0105 21:57:38.917850 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:38 crc kubenswrapper[5034]: I0105 21:57:38.917929 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:38 crc kubenswrapper[5034]: I0105 21:57:38.955465 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:39 crc kubenswrapper[5034]: I0105 21:57:39.144373 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:39 crc kubenswrapper[5034]: I0105 21:57:39.144447 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:39 crc kubenswrapper[5034]: I0105 21:57:39.179977 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:39 crc kubenswrapper[5034]: I0105 21:57:39.926996 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9zvx5" Jan 05 21:57:39 crc kubenswrapper[5034]: I0105 21:57:39.930382 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-28ptj" Jan 05 21:57:41 crc kubenswrapper[5034]: I0105 21:57:41.307155 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:41 crc kubenswrapper[5034]: I0105 21:57:41.307956 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:41 crc kubenswrapper[5034]: I0105 21:57:41.350655 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:41 crc kubenswrapper[5034]: I0105 21:57:41.518911 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:41 crc kubenswrapper[5034]: I0105 21:57:41.518973 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:41 crc kubenswrapper[5034]: I0105 21:57:41.559854 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:41 crc kubenswrapper[5034]: I0105 21:57:41.940956 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-775ks" Jan 05 21:57:41 crc kubenswrapper[5034]: I0105 21:57:41.960040 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rslnm" Jan 05 21:57:50 crc kubenswrapper[5034]: I0105 21:57:50.469121 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:57:50 crc kubenswrapper[5034]: I0105 21:57:50.469598 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:57:57 crc kubenswrapper[5034]: I0105 21:57:57.395574 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5qmt6" Jan 05 21:57:57 crc kubenswrapper[5034]: I0105 21:57:57.452637 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nstll"] Jan 05 21:58:20 crc kubenswrapper[5034]: I0105 21:58:20.468554 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:58:20 crc kubenswrapper[5034]: I0105 21:58:20.470018 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.497604 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" podUID="5367619c-e54b-4d73-9c9e-cf73bbe8dbed" containerName="registry" containerID="cri-o://205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823" gracePeriod=30 Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.844730 5034 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.973700 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.973799 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-ca-trust-extracted\") pod \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.973828 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-installation-pull-secrets\") pod \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.973862 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-trusted-ca\") pod \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.973885 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-tls\") pod \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.973910 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-bound-sa-token\") pod \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.974890 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5367619c-e54b-4d73-9c9e-cf73bbe8dbed" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.974999 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5367619c-e54b-4d73-9c9e-cf73bbe8dbed" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.975125 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-certificates\") pod \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.975186 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qll8h\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-kube-api-access-qll8h\") pod \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\" (UID: \"5367619c-e54b-4d73-9c9e-cf73bbe8dbed\") " Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.975378 5034 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.975397 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.979597 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5367619c-e54b-4d73-9c9e-cf73bbe8dbed" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.986513 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5367619c-e54b-4d73-9c9e-cf73bbe8dbed" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.986699 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-kube-api-access-qll8h" (OuterVolumeSpecName: "kube-api-access-qll8h") pod "5367619c-e54b-4d73-9c9e-cf73bbe8dbed" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed"). InnerVolumeSpecName "kube-api-access-qll8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.986883 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5367619c-e54b-4d73-9c9e-cf73bbe8dbed" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.987126 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5367619c-e54b-4d73-9c9e-cf73bbe8dbed" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed"). 
InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:58:22 crc kubenswrapper[5034]: I0105 21:58:22.991530 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5367619c-e54b-4d73-9c9e-cf73bbe8dbed" (UID: "5367619c-e54b-4d73-9c9e-cf73bbe8dbed"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.076627 5034 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.076684 5034 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.076700 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qll8h\" (UniqueName: \"kubernetes.io/projected/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-kube-api-access-qll8h\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.076718 5034 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.076732 5034 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5367619c-e54b-4d73-9c9e-cf73bbe8dbed-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.156243 5034 generic.go:334] "Generic (PLEG): container finished" podID="5367619c-e54b-4d73-9c9e-cf73bbe8dbed" containerID="205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823" exitCode=0 Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.156293 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" event={"ID":"5367619c-e54b-4d73-9c9e-cf73bbe8dbed","Type":"ContainerDied","Data":"205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823"} Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.156321 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" event={"ID":"5367619c-e54b-4d73-9c9e-cf73bbe8dbed","Type":"ContainerDied","Data":"8dcf67bed10171a7b7fa2fbca9cbc28d4e68cf31a770c5fc2d81ee7f14530f63"} Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.156325 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nstll" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.156342 5034 scope.go:117] "RemoveContainer" containerID="205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.170225 5034 scope.go:117] "RemoveContainer" containerID="205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823" Jan 05 21:58:23 crc kubenswrapper[5034]: E0105 21:58:23.170633 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823\": container with ID starting with 205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823 not found: ID does not exist" containerID="205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.170671 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823"} err="failed to get container status \"205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823\": rpc error: code = NotFound desc = could not find container \"205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823\": container with ID starting with 205dcad17f8bbe30909134b1ca3b3df40b9eb0cf84dfbdc4a26fd4fa2caa9823 not found: ID does not exist" Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.209476 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nstll"] Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.219944 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nstll"] Jan 05 21:58:23 crc kubenswrapper[5034]: I0105 21:58:23.858454 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5367619c-e54b-4d73-9c9e-cf73bbe8dbed" path="/var/lib/kubelet/pods/5367619c-e54b-4d73-9c9e-cf73bbe8dbed/volumes" Jan 05 21:58:50 crc kubenswrapper[5034]: I0105 21:58:50.469241 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:58:50 crc kubenswrapper[5034]: I0105 21:58:50.469721 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:58:50 crc kubenswrapper[5034]: I0105 21:58:50.469793 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 21:58:50 crc kubenswrapper[5034]: I0105 21:58:50.470343 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40d2eb8f27d98116792dd8c5580bff63065eb020012694353293a0840e15892d"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:58:50 crc kubenswrapper[5034]: I0105 
21:58:50.470395 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://40d2eb8f27d98116792dd8c5580bff63065eb020012694353293a0840e15892d" gracePeriod=600 Jan 05 21:58:51 crc kubenswrapper[5034]: I0105 21:58:51.323137 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="40d2eb8f27d98116792dd8c5580bff63065eb020012694353293a0840e15892d" exitCode=0 Jan 05 21:58:51 crc kubenswrapper[5034]: I0105 21:58:51.323300 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"40d2eb8f27d98116792dd8c5580bff63065eb020012694353293a0840e15892d"} Jan 05 21:58:51 crc kubenswrapper[5034]: I0105 21:58:51.323437 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"c36642d80caae31121949484a500455715981b05431b5b90c4a54cf3db825275"} Jan 05 21:58:51 crc kubenswrapper[5034]: I0105 21:58:51.323471 5034 scope.go:117] "RemoveContainer" containerID="dca5f98c84b4e6708a35d2340193c12c0f00763b2daf5ae5d0d76a8686d564c2" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.162443 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl"] Jan 05 22:00:00 crc kubenswrapper[5034]: E0105 22:00:00.164192 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5367619c-e54b-4d73-9c9e-cf73bbe8dbed" containerName="registry" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.164211 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5367619c-e54b-4d73-9c9e-cf73bbe8dbed" containerName="registry" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.164329 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5367619c-e54b-4d73-9c9e-cf73bbe8dbed" containerName="registry" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.164748 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.166499 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.166690 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.171820 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl"] Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.221041 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-config-volume\") pod \"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.221143 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-secret-volume\") pod \"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.221175 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9qv\" (UniqueName: \"kubernetes.io/projected/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-kube-api-access-6g9qv\") pod \"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.321953 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-config-volume\") pod \"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.322010 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-secret-volume\") pod \"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.322040 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g9qv\" (UniqueName: \"kubernetes.io/projected/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-kube-api-access-6g9qv\") pod \"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.322845 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-config-volume\") pod 
\"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.328280 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-secret-volume\") pod \"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.339411 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g9qv\" (UniqueName: \"kubernetes.io/projected/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-kube-api-access-6g9qv\") pod \"collect-profiles-29460840-mh2gl\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.531884 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:00 crc kubenswrapper[5034]: I0105 22:00:00.718806 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl"] Jan 05 22:00:01 crc kubenswrapper[5034]: I0105 22:00:01.677360 5034 generic.go:334] "Generic (PLEG): container finished" podID="fbd202ab-05ca-45d8-a0b5-bb0629ea5a75" containerID="752f4fb4a92ec2e4cdc6d8de06539066891d0741f2f2585f90bff243259a7f4b" exitCode=0 Jan 05 22:00:01 crc kubenswrapper[5034]: I0105 22:00:01.677550 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" event={"ID":"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75","Type":"ContainerDied","Data":"752f4fb4a92ec2e4cdc6d8de06539066891d0741f2f2585f90bff243259a7f4b"} Jan 05 22:00:01 crc kubenswrapper[5034]: I0105 22:00:01.677638 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" event={"ID":"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75","Type":"ContainerStarted","Data":"389ad4b788ec907e36f9a3e89bee34c4a30ff8c18e5a5c9e378bb604655ff405"} Jan 05 22:00:02 crc kubenswrapper[5034]: I0105 22:00:02.876542 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:02 crc kubenswrapper[5034]: I0105 22:00:02.956428 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-secret-volume\") pod \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " Jan 05 22:00:02 crc kubenswrapper[5034]: I0105 22:00:02.956527 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g9qv\" (UniqueName: \"kubernetes.io/projected/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-kube-api-access-6g9qv\") pod \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " Jan 05 22:00:02 crc kubenswrapper[5034]: I0105 22:00:02.956584 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-config-volume\") pod \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\" (UID: \"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75\") " Jan 05 22:00:02 crc kubenswrapper[5034]: I0105 22:00:02.957291 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-config-volume" (OuterVolumeSpecName: "config-volume") pod "fbd202ab-05ca-45d8-a0b5-bb0629ea5a75" (UID: "fbd202ab-05ca-45d8-a0b5-bb0629ea5a75"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:00:02 crc kubenswrapper[5034]: I0105 22:00:02.962481 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-kube-api-access-6g9qv" (OuterVolumeSpecName: "kube-api-access-6g9qv") pod "fbd202ab-05ca-45d8-a0b5-bb0629ea5a75" (UID: "fbd202ab-05ca-45d8-a0b5-bb0629ea5a75"). InnerVolumeSpecName "kube-api-access-6g9qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:00:02 crc kubenswrapper[5034]: I0105 22:00:02.962636 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fbd202ab-05ca-45d8-a0b5-bb0629ea5a75" (UID: "fbd202ab-05ca-45d8-a0b5-bb0629ea5a75"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:00:03 crc kubenswrapper[5034]: I0105 22:00:03.058577 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[5034]: I0105 22:00:03.058627 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g9qv\" (UniqueName: \"kubernetes.io/projected/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-kube-api-access-6g9qv\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[5034]: I0105 22:00:03.058644 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[5034]: I0105 22:00:03.689688 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" event={"ID":"fbd202ab-05ca-45d8-a0b5-bb0629ea5a75","Type":"ContainerDied","Data":"389ad4b788ec907e36f9a3e89bee34c4a30ff8c18e5a5c9e378bb604655ff405"} Jan 05 22:00:03 crc kubenswrapper[5034]: I0105 22:00:03.690015 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="389ad4b788ec907e36f9a3e89bee34c4a30ff8c18e5a5c9e378bb604655ff405" Jan 05 22:00:03 crc kubenswrapper[5034]: I0105 22:00:03.689779 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl" Jan 05 22:00:50 crc kubenswrapper[5034]: I0105 22:00:50.469826 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:00:50 crc kubenswrapper[5034]: I0105 22:00:50.470734 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:01:20 crc kubenswrapper[5034]: I0105 22:01:20.468889 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:01:20 crc kubenswrapper[5034]: I0105 22:01:20.470395 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:01:50 crc kubenswrapper[5034]: I0105 22:01:50.469294 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:01:50 crc kubenswrapper[5034]: I0105 22:01:50.469813 
5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:01:50 crc kubenswrapper[5034]: I0105 22:01:50.469861 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:01:50 crc kubenswrapper[5034]: I0105 22:01:50.470421 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c36642d80caae31121949484a500455715981b05431b5b90c4a54cf3db825275"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:01:50 crc kubenswrapper[5034]: I0105 22:01:50.470476 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://c36642d80caae31121949484a500455715981b05431b5b90c4a54cf3db825275" gracePeriod=600 Jan 05 22:01:51 crc kubenswrapper[5034]: I0105 22:01:51.187955 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="c36642d80caae31121949484a500455715981b05431b5b90c4a54cf3db825275" exitCode=0 Jan 05 22:01:51 crc kubenswrapper[5034]: I0105 22:01:51.188019 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"c36642d80caae31121949484a500455715981b05431b5b90c4a54cf3db825275"} Jan 05 22:01:51 crc kubenswrapper[5034]: I0105 22:01:51.188546 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"88445724b2a970c08e5f2c6402ea7e57704a8e3d0fb29457d3eb885ad064167b"} Jan 05 22:01:51 crc kubenswrapper[5034]: I0105 22:01:51.188566 5034 scope.go:117] "RemoveContainer" containerID="40d2eb8f27d98116792dd8c5580bff63065eb020012694353293a0840e15892d" Jan 05 22:03:50 crc kubenswrapper[5034]: I0105 22:03:50.469032 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:03:50 crc kubenswrapper[5034]: I0105 22:03:50.469603 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:04:19 crc kubenswrapper[5034]: I0105 22:04:19.636133 5034 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 05 22:04:20 crc kubenswrapper[5034]: I0105 22:04:20.469220 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:04:20 crc kubenswrapper[5034]: I0105 22:04:20.470185 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.687825 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6fmfz"] Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.689841 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovn-controller" containerID="cri-o://b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db" gracePeriod=30 Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.689915 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="northd" containerID="cri-o://3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288" gracePeriod=30 Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.689977 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kube-rbac-proxy-node" containerID="cri-o://527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4" gracePeriod=30 Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.689931 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="sbdb" containerID="cri-o://c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9" gracePeriod=30 Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.690061 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f" gracePeriod=30 Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.690115 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovn-acl-logging" containerID="cri-o://7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6" gracePeriod=30 Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.689999 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="nbdb" containerID="cri-o://9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032" gracePeriod=30 Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.728099 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" 
podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" containerID="cri-o://6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380" gracePeriod=30 Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.969766 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/3.log" Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.972127 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovn-acl-logging/0.log" Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.972639 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovn-controller/0.log" Jan 05 22:04:27 crc kubenswrapper[5034]: I0105 22:04:27.972983 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.026880 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ll5rd"] Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027148 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027186 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027197 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="nbdb" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027202 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="nbdb" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027211 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kube-rbac-proxy-node" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027217 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kube-rbac-proxy-node" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027225 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd202ab-05ca-45d8-a0b5-bb0629ea5a75" containerName="collect-profiles" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027231 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd202ab-05ca-45d8-a0b5-bb0629ea5a75" containerName="collect-profiles" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027239 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027267 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027277 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027282 5034 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027289 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kubecfg-setup" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027294 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kubecfg-setup" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027314 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="sbdb" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027319 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="sbdb" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027348 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovn-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027354 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovn-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027365 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="northd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027370 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="northd" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027377 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027382 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027391 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovn-acl-logging" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027397 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovn-acl-logging" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027405 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027411 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027512 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovn-acl-logging" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027524 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd202ab-05ca-45d8-a0b5-bb0629ea5a75" containerName="collect-profiles" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027530 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027541 5034 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovn-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027549 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027556 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="nbdb" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027569 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027576 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027582 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="kube-rbac-proxy-node" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027589 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="sbdb" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027595 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="northd" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.027686 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027693 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027784 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.027795 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerName="ovnkube-controller" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.030634 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.080652 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/2.log" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.081667 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/1.log" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.081729 5034 generic.go:334] "Generic (PLEG): container finished" podID="691cc76e-ed89-4547-9bb1-58b03c8f7932" containerID="7d8d3280f5d4e9e2ad1d86c2f4531a86cb70ed40c439093604147b08ca3aae00" exitCode=2 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.081804 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tsch6" event={"ID":"691cc76e-ed89-4547-9bb1-58b03c8f7932","Type":"ContainerDied","Data":"7d8d3280f5d4e9e2ad1d86c2f4531a86cb70ed40c439093604147b08ca3aae00"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.081847 5034 scope.go:117] "RemoveContainer" containerID="5bee2a82415261125307526eb569b37e859fa88f3985179fd3bb09d55478942a" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.082316 5034 scope.go:117] "RemoveContainer" containerID="7d8d3280f5d4e9e2ad1d86c2f4531a86cb70ed40c439093604147b08ca3aae00" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.086548 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovnkube-controller/3.log" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.088335 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovn-acl-logging/0.log" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.090906 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fmfz_788e0f44-29c3-4c4a-afe9-33c26a965d74/ovn-controller/0.log" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091412 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380" exitCode=0 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091497 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9" exitCode=0 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091553 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032" exitCode=0 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091602 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288" exitCode=0 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091652 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f" exitCode=0 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091699 5034 generic.go:334] "Generic (PLEG): container finished" 
podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4" exitCode=0 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091752 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6" exitCode=143 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091803 5034 generic.go:334] "Generic (PLEG): container finished" podID="788e0f44-29c3-4c4a-afe9-33c26a965d74" containerID="b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db" exitCode=143 Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091864 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.091972 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092036 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092137 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092224 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092293 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092375 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092455 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092505 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092571 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} Jan 
05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092624 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092669 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092718 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092766 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092814 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092865 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092918 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.092977 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093028 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093101 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093165 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093212 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093261 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093321 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} Jan 
05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093392 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093468 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093605 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093670 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093748 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093797 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093863 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093912 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093963 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094007 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094085 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094148 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094198 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094247 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} Jan 
05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094302 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" event={"ID":"788e0f44-29c3-4c4a-afe9-33c26a965d74","Type":"ContainerDied","Data":"a23c07583da8fc9983e5ecda000bce5feb53f433e482d48f0c480004ca259d7b"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094359 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094410 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094479 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094528 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094580 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094627 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.094675 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.095242 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.095294 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.095343 5034 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.093295 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6fmfz" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123689 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-ovn\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123746 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v7pr\" (UniqueName: \"kubernetes.io/projected/788e0f44-29c3-4c4a-afe9-33c26a965d74-kube-api-access-4v7pr\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123776 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-var-lib-cni-networks-ovn-kubernetes\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123800 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123811 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovn-node-metrics-cert\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123881 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-env-overrides\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123926 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-script-lib\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123944 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-node-log\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123961 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-log-socket\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.123980 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-netd\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124003 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-etc-openvswitch\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124018 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-config\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124031 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-slash\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124046 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-kubelet\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124181 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-openvswitch\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124204 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-systemd-units\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124234 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-var-lib-openvswitch\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124252 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-ovn-kubernetes\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124266 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-bin\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124295 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-netns\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124311 5034 scope.go:117] "RemoveContainer" containerID="6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124392 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-slash" (OuterVolumeSpecName: "host-slash") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124317 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-systemd\") pod \"788e0f44-29c3-4c4a-afe9-33c26a965d74\" (UID: \"788e0f44-29c3-4c4a-afe9-33c26a965d74\") " Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124465 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124528 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124693 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124715 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124769 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124809 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124809 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-log-socket" (OuterVolumeSpecName: "log-socket") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124816 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124829 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124827 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124860 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-node-log" (OuterVolumeSpecName: "node-log") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124845 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124816 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-systemd\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124912 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-log-socket\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124922 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124944 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovn-node-metrics-cert\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124978 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-systemd-units\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.124997 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-etc-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125100 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-ovn\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125164 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwc9\" (UniqueName: \"kubernetes.io/projected/a9c5399b-f58b-45e7-bc21-8e806513eecc-kube-api-access-mzwc9\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125178 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-script-lib" 
(OuterVolumeSpecName: "ovnkube-script-lib") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125197 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125201 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125241 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovnkube-script-lib\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125274 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-run-ovn-kubernetes\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125295 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovnkube-config\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125334 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-cni-netd\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125388 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-kubelet\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125419 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-env-overrides\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125444 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-slash\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125505 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125578 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-run-netns\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125620 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-node-log\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125662 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-var-lib-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125686 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-cni-bin\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125747 5034 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125767 5034 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-node-log\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125780 5034 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-log-socket\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125790 5034 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: 
I0105 22:04:28.125802 5034 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125813 5034 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125824 5034 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-slash\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125848 5034 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125888 5034 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125913 5034 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125926 5034 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125936 5034 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125946 5034 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125956 5034 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125967 5034 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125977 5034 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.125988 5034 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/788e0f44-29c3-4c4a-afe9-33c26a965d74-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 
22:04:28.129529 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.136030 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788e0f44-29c3-4c4a-afe9-33c26a965d74-kube-api-access-4v7pr" (OuterVolumeSpecName: "kube-api-access-4v7pr") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "kube-api-access-4v7pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.137113 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "788e0f44-29c3-4c4a-afe9-33c26a965d74" (UID: "788e0f44-29c3-4c4a-afe9-33c26a965d74"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.145788 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.161231 5034 scope.go:117] "RemoveContainer" containerID="c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.173271 5034 scope.go:117] "RemoveContainer" containerID="9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.188633 5034 scope.go:117] "RemoveContainer" containerID="3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.206256 5034 scope.go:117] "RemoveContainer" containerID="45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.220529 5034 scope.go:117] "RemoveContainer" containerID="527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227432 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-slash\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227477 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227535 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-run-netns\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc 
kubenswrapper[5034]: I0105 22:04:28.227555 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-node-log\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227570 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227585 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-var-lib-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227587 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-slash\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227617 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-run-netns\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227619 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-var-lib-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227650 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-node-log\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227711 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-cni-bin\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227739 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-cni-bin\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227794 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-systemd\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227810 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-log-socket\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227841 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovn-node-metrics-cert\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227854 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-systemd\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227858 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-systemd-units\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227877 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-systemd-units\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227898 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-etc-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227898 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-log-socket\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227929 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-ovn\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227981 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzwc9\" (UniqueName: \"kubernetes.io/projected/a9c5399b-f58b-45e7-bc21-8e806513eecc-kube-api-access-mzwc9\") pod \"ovnkube-node-ll5rd\" (UID: 
\"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.227998 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228033 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovnkube-script-lib\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228057 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-run-ovn-kubernetes\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228120 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovnkube-config\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228139 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-cni-netd\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228183 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-kubelet\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228206 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-env-overrides\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228261 5034 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/788e0f44-29c3-4c4a-afe9-33c26a965d74-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228271 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v7pr\" (UniqueName: \"kubernetes.io/projected/788e0f44-29c3-4c4a-afe9-33c26a965d74-kube-api-access-4v7pr\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228280 5034 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/788e0f44-29c3-4c4a-afe9-33c26a965d74-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228536 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-etc-openvswitch\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228598 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-run-ovn\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228677 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-run-ovn-kubernetes\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228714 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228825 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-env-overrides\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228957 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-cni-netd\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.228998 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9c5399b-f58b-45e7-bc21-8e806513eecc-host-kubelet\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.229287 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovnkube-script-lib\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.229338 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovnkube-config\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.231726 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9c5399b-f58b-45e7-bc21-8e806513eecc-ovn-node-metrics-cert\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.234172 5034 scope.go:117] "RemoveContainer" containerID="7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.246122 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzwc9\" (UniqueName: \"kubernetes.io/projected/a9c5399b-f58b-45e7-bc21-8e806513eecc-kube-api-access-mzwc9\") pod \"ovnkube-node-ll5rd\" (UID: \"a9c5399b-f58b-45e7-bc21-8e806513eecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.250464 5034 scope.go:117] "RemoveContainer" containerID="b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.265614 5034 scope.go:117] "RemoveContainer" containerID="0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.277375 5034 scope.go:117] "RemoveContainer" containerID="6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.277722 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": container with ID starting with 6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380 not found: ID does not exist" containerID="6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.277756 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} err="failed to get container status \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": rpc error: code = NotFound desc = could not find container \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": container with ID starting with 6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.277786 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.278098 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": container with ID starting with 409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483 not found: ID does not exist" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.278150 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} err="failed to get container status 
\"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": rpc error: code = NotFound desc = could not find container \"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": container with ID starting with 409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.278181 5034 scope.go:117] "RemoveContainer" containerID="c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.278484 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": container with ID starting with c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9 not found: ID does not exist" containerID="c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.278522 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} err="failed to get container status \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": rpc error: code = NotFound desc = could not find container \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": container with ID starting with c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.278547 5034 scope.go:117] "RemoveContainer" containerID="9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.278750 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": container with ID starting with 9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032 not found: ID does not exist" containerID="9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.278776 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} err="failed to get container status \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": rpc error: code = NotFound desc = could not find container \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": container with ID starting with 9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.278788 5034 scope.go:117] "RemoveContainer" containerID="3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.278958 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": container with ID starting with 3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288 not found: ID does not exist" containerID="3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.278987 5034 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} err="failed to get container status \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": rpc error: code = NotFound desc = could not find container \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": container with ID starting with 3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.278999 5034 scope.go:117] "RemoveContainer" containerID="45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.279193 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": container with ID starting with 45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f not found: ID does not exist" containerID="45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.279210 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} err="failed to get container status \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": rpc error: code = NotFound desc = could not find container \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": container with ID starting with 45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.279221 5034 scope.go:117] "RemoveContainer" containerID="527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.279377 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": container with ID starting with 527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4 not found: ID does not exist" containerID="527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.279393 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} err="failed to get container status \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": rpc error: code = NotFound desc = could not find container \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": container with ID starting with 527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.279403 5034 scope.go:117] "RemoveContainer" containerID="7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.279610 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": container with ID starting with 7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6 not found: ID does not exist" 
containerID="7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.279626 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} err="failed to get container status \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": rpc error: code = NotFound desc = could not find container \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": container with ID starting with 7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.279638 5034 scope.go:117] "RemoveContainer" containerID="b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.279845 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": container with ID starting with b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db not found: ID does not exist" containerID="b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.279861 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} err="failed to get container status \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": rpc error: code = NotFound desc = could not find container \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": container with ID starting with b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.279890 5034 scope.go:117] "RemoveContainer" containerID="0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a" Jan 05 22:04:28 crc kubenswrapper[5034]: E0105 22:04:28.280116 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": container with ID starting with 0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a not found: ID does not exist" containerID="0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.280132 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} err="failed to get container status \"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": rpc error: code = NotFound desc = could not find container \"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": container with ID starting with 0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.280145 5034 scope.go:117] "RemoveContainer" containerID="6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.280375 5034 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} err="failed to get container status \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": rpc error: code = NotFound desc = could not find container \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": container with ID starting with 6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.280416 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.280861 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} err="failed to get container status \"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": rpc error: code = NotFound desc = could not find container \"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": container with ID starting with 409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.280897 5034 scope.go:117] "RemoveContainer" containerID="c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.281222 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} err="failed to get container status \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": rpc error: code = NotFound desc = could not find container \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": container with ID starting with c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.281253 5034 scope.go:117] "RemoveContainer" containerID="9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.281634 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} err="failed to get container status \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": rpc error: code = NotFound desc = could not find container \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": container with ID starting with 9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.281673 5034 scope.go:117] "RemoveContainer" containerID="3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282028 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} err="failed to get container status \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": rpc error: code = NotFound desc = could not find container \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": container with ID starting with 3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288 not found: ID does not exist" Jan 
05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282052 5034 scope.go:117] "RemoveContainer" containerID="45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282284 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} err="failed to get container status \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": rpc error: code = NotFound desc = could not find container \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": container with ID starting with 45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282303 5034 scope.go:117] "RemoveContainer" containerID="527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282479 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} err="failed to get container status \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": rpc error: code = NotFound desc = could not find container \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": container with ID starting with 527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282503 5034 scope.go:117] "RemoveContainer" containerID="7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282713 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} err="failed to get container status \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": rpc error: code = NotFound desc = could not find container \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": container with ID starting with 7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282729 5034 scope.go:117] "RemoveContainer" containerID="b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282930 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} err="failed to get container status \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": rpc error: code = NotFound desc = could not find container \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": container with ID starting with b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.282945 5034 scope.go:117] "RemoveContainer" containerID="0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.283205 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} err="failed to get container status 
\"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": rpc error: code = NotFound desc = could not find container \"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": container with ID starting with 0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.283222 5034 scope.go:117] "RemoveContainer" containerID="6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.283431 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} err="failed to get container status \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": rpc error: code = NotFound desc = could not find container \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": container with ID starting with 6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.283452 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.283659 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} err="failed to get container status \"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": rpc error: code = NotFound desc = could not find container \"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": container with ID starting with 409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.283678 5034 scope.go:117] "RemoveContainer" containerID="c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.283926 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} err="failed to get container status \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": rpc error: code = NotFound desc = could not find container \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": container with ID starting with c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.283940 5034 scope.go:117] "RemoveContainer" containerID="9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.284142 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} err="failed to get container status \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": rpc error: code = NotFound desc = could not find container \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": container with ID starting with 9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.284161 5034 scope.go:117] "RemoveContainer" 
containerID="3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.284353 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} err="failed to get container status \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": rpc error: code = NotFound desc = could not find container \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": container with ID starting with 3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.284373 5034 scope.go:117] "RemoveContainer" containerID="45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.284581 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} err="failed to get container status \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": rpc error: code = NotFound desc = could not find container \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": container with ID starting with 45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.284621 5034 scope.go:117] "RemoveContainer" containerID="527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.284799 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} err="failed to get container status \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": rpc error: code = NotFound desc = could not find container \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": container with ID starting with 527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.284815 5034 scope.go:117] "RemoveContainer" containerID="7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.285041 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} err="failed to get container status \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": rpc error: code = NotFound desc = could not find container \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": container with ID starting with 7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.285058 5034 scope.go:117] "RemoveContainer" containerID="b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.285401 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} err="failed to get container status \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": rpc error: code = NotFound desc = could not find 
container \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": container with ID starting with b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.285419 5034 scope.go:117] "RemoveContainer" containerID="0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.285620 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} err="failed to get container status \"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": rpc error: code = NotFound desc = could not find container \"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": container with ID starting with 0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.285639 5034 scope.go:117] "RemoveContainer" containerID="6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.285819 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380"} err="failed to get container status \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": rpc error: code = NotFound desc = could not find container \"6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380\": container with ID starting with 6d6e482e33a981e891c192b96f965107b950f4cd70b973c8dd3f59ca3fc93380 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.285837 5034 scope.go:117] "RemoveContainer" containerID="409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.286066 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483"} err="failed to get container status \"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": rpc error: code = NotFound desc = could not find container \"409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483\": container with ID starting with 409adce771b0fd562bc2c76fce742fb46fef03da1180b43eb0d9c972e61f9483 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.286106 5034 scope.go:117] "RemoveContainer" containerID="c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.286339 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9"} err="failed to get container status \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": rpc error: code = NotFound desc = could not find container \"c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9\": container with ID starting with c215da64474d10c9e4609a8d3fde7eeae48949ae74991a664310b69ffd6c67f9 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.286356 5034 scope.go:117] "RemoveContainer" containerID="9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.286557 5034 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032"} err="failed to get container status \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": rpc error: code = NotFound desc = could not find container \"9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032\": container with ID starting with 9bfaf8eca01bedbf4e005ff40b17a4de966674b4223ca411fc4f06383bfce032 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.286586 5034 scope.go:117] "RemoveContainer" containerID="3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.286868 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288"} err="failed to get container status \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": rpc error: code = NotFound desc = could not find container \"3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288\": container with ID starting with 3d586151cebc91606b7a966b8762033996bbbb8450af93e7d3c6625e97d2c288 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.286887 5034 scope.go:117] "RemoveContainer" containerID="45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.287107 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f"} err="failed to get container status \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": rpc error: code = NotFound desc = could not find container \"45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f\": container with ID starting with 45cd8c6f9972457acda498d20d6d7c722d10bf383ee89d06d2045c88d048333f not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.287129 5034 scope.go:117] "RemoveContainer" containerID="527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.287384 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4"} err="failed to get container status \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": rpc error: code = NotFound desc = could not find container \"527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4\": container with ID starting with 527f76186818e6a2e6c1a49b6c8ec011fc9c6c1c7b923bbe15e1f7344726dab4 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.287403 5034 scope.go:117] "RemoveContainer" containerID="7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.287672 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6"} err="failed to get container status \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": rpc error: code = NotFound desc = could not find container \"7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6\": container with ID starting with 
7c2563445123ae032104b84f4f6b6d51241a18ca8ea6a0f150f834422efbe9c6 not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.287688 5034 scope.go:117] "RemoveContainer" containerID="b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.287882 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db"} err="failed to get container status \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": rpc error: code = NotFound desc = could not find container \"b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db\": container with ID starting with b1cf1d385d620fcb4d23ee97e29817cc26c3e108b261554dd777e7e263de24db not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.287905 5034 scope.go:117] "RemoveContainer" containerID="0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.288119 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a"} err="failed to get container status \"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": rpc error: code = NotFound desc = could not find container \"0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a\": container with ID starting with 0a14ffb25b5a979ba0cd3130e151353ebf047e6dd5383b37170c048b053db01a not found: ID does not exist" Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.348945 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:28 crc kubenswrapper[5034]: W0105 22:04:28.366298 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9c5399b_f58b_45e7_bc21_8e806513eecc.slice/crio-f0ab27cb3043830ba9d0696c1a3192d4bc57f5193855e3369729641524dde16e WatchSource:0}: Error finding container f0ab27cb3043830ba9d0696c1a3192d4bc57f5193855e3369729641524dde16e: Status 404 returned error can't find the container with id f0ab27cb3043830ba9d0696c1a3192d4bc57f5193855e3369729641524dde16e Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.433239 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6fmfz"] Jan 05 22:04:28 crc kubenswrapper[5034]: I0105 22:04:28.440192 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6fmfz"] Jan 05 22:04:29 crc kubenswrapper[5034]: I0105 22:04:29.097728 5034 generic.go:334] "Generic (PLEG): container finished" podID="a9c5399b-f58b-45e7-bc21-8e806513eecc" containerID="f8e93487cf52f062b95c7366cb46848d3298345483008e1f91e8395a8ad9ee96" exitCode=0 Jan 05 22:04:29 crc kubenswrapper[5034]: I0105 22:04:29.097883 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerDied","Data":"f8e93487cf52f062b95c7366cb46848d3298345483008e1f91e8395a8ad9ee96"} Jan 05 22:04:29 crc kubenswrapper[5034]: I0105 22:04:29.098096 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" 
event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"f0ab27cb3043830ba9d0696c1a3192d4bc57f5193855e3369729641524dde16e"} Jan 05 22:04:29 crc kubenswrapper[5034]: I0105 22:04:29.102338 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tsch6_691cc76e-ed89-4547-9bb1-58b03c8f7932/kube-multus/2.log" Jan 05 22:04:29 crc kubenswrapper[5034]: I0105 22:04:29.102383 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tsch6" event={"ID":"691cc76e-ed89-4547-9bb1-58b03c8f7932","Type":"ContainerStarted","Data":"fe0d8c46c26152a61e70a4e517fbddd67ee50882793115e88c3edc8b220c3b8d"} Jan 05 22:04:29 crc kubenswrapper[5034]: I0105 22:04:29.844785 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788e0f44-29c3-4c4a-afe9-33c26a965d74" path="/var/lib/kubelet/pods/788e0f44-29c3-4c4a-afe9-33c26a965d74/volumes" Jan 05 22:04:30 crc kubenswrapper[5034]: I0105 22:04:30.110242 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"5236139ef8dc22c2c1e79116b31639be21f99a2552125e67c882d6348db6be60"} Jan 05 22:04:30 crc kubenswrapper[5034]: I0105 22:04:30.110279 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"786a9d83f9d02fcfe8a4aa3d03869dfa88ccc7491e3cc3a73124f2fcd3fbb971"} Jan 05 22:04:30 crc kubenswrapper[5034]: I0105 22:04:30.110289 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"ed853bedbc75de65563641081dfd40cc89197dffc6afe86f7567c5bfc3b5adfa"} Jan 05 22:04:30 crc kubenswrapper[5034]: I0105 22:04:30.110301 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"d4459cd75523a58d7ee9dafab9793612e86260c5563f5f0c81286b43bc0fe079"} Jan 05 22:04:30 crc kubenswrapper[5034]: I0105 22:04:30.110309 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"41855d8e25ac233b3d7d2aeb7881521f9dcc2ddfa89428168dcd8523143a62ac"} Jan 05 22:04:30 crc kubenswrapper[5034]: I0105 22:04:30.110318 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"6c1bb4868723d5d92bbbab5b18f73b42942f9d4ebad9ddcfa19066eac0ff8b82"} Jan 05 22:04:32 crc kubenswrapper[5034]: I0105 22:04:32.123157 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"eca9a27c74e819c00137fe6a7600ba5d3d96786cf3e50fef1d197f93b5b33391"} Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.710254 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-d6lqk"] Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.710938 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.715442 5034 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-ck854" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.715495 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.715702 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.715748 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.793654 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b6777338-b2ee-4112-8f06-ea26ba3b8183-node-mnt\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.793730 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862jz\" (UniqueName: \"kubernetes.io/projected/b6777338-b2ee-4112-8f06-ea26ba3b8183-kube-api-access-862jz\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.793800 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b6777338-b2ee-4112-8f06-ea26ba3b8183-crc-storage\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.895558 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b6777338-b2ee-4112-8f06-ea26ba3b8183-crc-storage\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.895740 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b6777338-b2ee-4112-8f06-ea26ba3b8183-node-mnt\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.895809 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-862jz\" (UniqueName: \"kubernetes.io/projected/b6777338-b2ee-4112-8f06-ea26ba3b8183-kube-api-access-862jz\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.896576 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b6777338-b2ee-4112-8f06-ea26ba3b8183-node-mnt\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.897173 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b6777338-b2ee-4112-8f06-ea26ba3b8183-crc-storage\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:33 crc kubenswrapper[5034]: I0105 22:04:33.915054 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-862jz\" (UniqueName: \"kubernetes.io/projected/b6777338-b2ee-4112-8f06-ea26ba3b8183-kube-api-access-862jz\") pod \"crc-storage-crc-d6lqk\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:34 crc kubenswrapper[5034]: I0105 22:04:34.025747 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:34 crc kubenswrapper[5034]: E0105 22:04:34.047208 5034 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-d6lqk_crc-storage_b6777338-b2ee-4112-8f06-ea26ba3b8183_0(b74f3fd1a30e0c3323b9b36e9cdd1184908bcd10b8c08d09c5afaf4fcadec05c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 22:04:34 crc kubenswrapper[5034]: E0105 22:04:34.047809 5034 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-d6lqk_crc-storage_b6777338-b2ee-4112-8f06-ea26ba3b8183_0(b74f3fd1a30e0c3323b9b36e9cdd1184908bcd10b8c08d09c5afaf4fcadec05c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:34 crc kubenswrapper[5034]: E0105 22:04:34.047883 5034 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-d6lqk_crc-storage_b6777338-b2ee-4112-8f06-ea26ba3b8183_0(b74f3fd1a30e0c3323b9b36e9cdd1184908bcd10b8c08d09c5afaf4fcadec05c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:34 crc kubenswrapper[5034]: E0105 22:04:34.047981 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-d6lqk_crc-storage(b6777338-b2ee-4112-8f06-ea26ba3b8183)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-d6lqk_crc-storage(b6777338-b2ee-4112-8f06-ea26ba3b8183)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-d6lqk_crc-storage_b6777338-b2ee-4112-8f06-ea26ba3b8183_0(b74f3fd1a30e0c3323b9b36e9cdd1184908bcd10b8c08d09c5afaf4fcadec05c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-d6lqk" podUID="b6777338-b2ee-4112-8f06-ea26ba3b8183" Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.082678 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-d6lqk"] Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.086461 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.087234 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:35 crc kubenswrapper[5034]: E0105 22:04:35.117617 5034 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-d6lqk_crc-storage_b6777338-b2ee-4112-8f06-ea26ba3b8183_0(d4c50de32875406ebc36f22f8972085c93c276f07881e2497193bfa88e310018): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 22:04:35 crc kubenswrapper[5034]: E0105 22:04:35.117812 5034 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-d6lqk_crc-storage_b6777338-b2ee-4112-8f06-ea26ba3b8183_0(d4c50de32875406ebc36f22f8972085c93c276f07881e2497193bfa88e310018): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:35 crc kubenswrapper[5034]: E0105 22:04:35.117896 5034 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-d6lqk_crc-storage_b6777338-b2ee-4112-8f06-ea26ba3b8183_0(d4c50de32875406ebc36f22f8972085c93c276f07881e2497193bfa88e310018): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:35 crc kubenswrapper[5034]: E0105 22:04:35.118025 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-d6lqk_crc-storage(b6777338-b2ee-4112-8f06-ea26ba3b8183)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-d6lqk_crc-storage(b6777338-b2ee-4112-8f06-ea26ba3b8183)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-d6lqk_crc-storage_b6777338-b2ee-4112-8f06-ea26ba3b8183_0(d4c50de32875406ebc36f22f8972085c93c276f07881e2497193bfa88e310018): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-d6lqk" podUID="b6777338-b2ee-4112-8f06-ea26ba3b8183" Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.140091 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" event={"ID":"a9c5399b-f58b-45e7-bc21-8e806513eecc","Type":"ContainerStarted","Data":"fb9b28c6dc6c43b86108ac8afcf70124cb61b4954e2dfbdf865bc2a9eb78297c"} Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.141183 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.141226 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.141358 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.175293 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" podStartSLOduration=7.175272283 podStartE2EDuration="7.175272283s" podCreationTimestamp="2026-01-05 22:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:04:35.173183262 +0000 UTC m=+767.545182701" watchObservedRunningTime="2026-01-05 22:04:35.175272283 +0000 UTC m=+767.547271712" Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.205847 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:35 crc kubenswrapper[5034]: I0105 22:04:35.207496 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.439275 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m6vz5"] Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.441186 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.450003 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m6vz5"] Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.577411 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh9cw\" (UniqueName: \"kubernetes.io/projected/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-kube-api-access-sh9cw\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.577519 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-catalog-content\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.577565 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-utilities\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.679169 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-catalog-content\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.679704 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-utilities\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.679859 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh9cw\" (UniqueName: \"kubernetes.io/projected/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-kube-api-access-sh9cw\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.680224 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-catalog-content\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.680539 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-utilities\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.703804 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sh9cw\" (UniqueName: \"kubernetes.io/projected/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-kube-api-access-sh9cw\") pod \"redhat-operators-m6vz5\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:40 crc kubenswrapper[5034]: I0105 22:04:40.771244 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:41 crc kubenswrapper[5034]: I0105 22:04:41.163012 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m6vz5"] Jan 05 22:04:42 crc kubenswrapper[5034]: I0105 22:04:42.182450 5034 generic.go:334] "Generic (PLEG): container finished" podID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerID="7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92" exitCode=0 Jan 05 22:04:42 crc kubenswrapper[5034]: I0105 22:04:42.182488 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6vz5" event={"ID":"2c668bd9-b1b3-4478-bda1-1402e2a5fafd","Type":"ContainerDied","Data":"7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92"} Jan 05 22:04:42 crc kubenswrapper[5034]: I0105 22:04:42.182517 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6vz5" event={"ID":"2c668bd9-b1b3-4478-bda1-1402e2a5fafd","Type":"ContainerStarted","Data":"e5b53a08fc91dc957fbced5b51cbd1c0ec64873885b198c5174d5ac9cdc75c9d"} Jan 05 22:04:42 crc kubenswrapper[5034]: I0105 22:04:42.184638 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:04:43 crc kubenswrapper[5034]: I0105 22:04:43.189628 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6vz5" event={"ID":"2c668bd9-b1b3-4478-bda1-1402e2a5fafd","Type":"ContainerStarted","Data":"9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3"} Jan 05 22:04:44 crc kubenswrapper[5034]: I0105 22:04:44.196261 5034 generic.go:334] "Generic (PLEG): container finished" podID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerID="9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3" exitCode=0 Jan 05 22:04:44 crc kubenswrapper[5034]: I0105 22:04:44.196300 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6vz5" event={"ID":"2c668bd9-b1b3-4478-bda1-1402e2a5fafd","Type":"ContainerDied","Data":"9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3"} Jan 05 22:04:45 crc kubenswrapper[5034]: I0105 22:04:45.209830 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6vz5" event={"ID":"2c668bd9-b1b3-4478-bda1-1402e2a5fafd","Type":"ContainerStarted","Data":"6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40"} Jan 05 22:04:45 crc kubenswrapper[5034]: I0105 22:04:45.228190 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m6vz5" podStartSLOduration=2.829360894 podStartE2EDuration="5.228170112s" podCreationTimestamp="2026-01-05 22:04:40 +0000 UTC" firstStartedPulling="2026-01-05 22:04:42.184422846 +0000 UTC m=+774.556422285" lastFinishedPulling="2026-01-05 22:04:44.583232064 +0000 UTC m=+776.955231503" observedRunningTime="2026-01-05 22:04:45.226186575 +0000 UTC m=+777.598186024" watchObservedRunningTime="2026-01-05 22:04:45.228170112 +0000 UTC m=+777.600169551" Jan 05 22:04:47 crc 
kubenswrapper[5034]: I0105 22:04:47.838693 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:47 crc kubenswrapper[5034]: I0105 22:04:47.841717 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:48 crc kubenswrapper[5034]: I0105 22:04:48.067330 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-d6lqk"] Jan 05 22:04:48 crc kubenswrapper[5034]: W0105 22:04:48.070747 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6777338_b2ee_4112_8f06_ea26ba3b8183.slice/crio-f9bb61cbbd8e1dd6bb4e9b35d729af316a02e526ea2c3061ad839058d8176237 WatchSource:0}: Error finding container f9bb61cbbd8e1dd6bb4e9b35d729af316a02e526ea2c3061ad839058d8176237: Status 404 returned error can't find the container with id f9bb61cbbd8e1dd6bb4e9b35d729af316a02e526ea2c3061ad839058d8176237 Jan 05 22:04:48 crc kubenswrapper[5034]: I0105 22:04:48.228768 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-d6lqk" event={"ID":"b6777338-b2ee-4112-8f06-ea26ba3b8183","Type":"ContainerStarted","Data":"f9bb61cbbd8e1dd6bb4e9b35d729af316a02e526ea2c3061ad839058d8176237"} Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.238182 5034 generic.go:334] "Generic (PLEG): container finished" podID="b6777338-b2ee-4112-8f06-ea26ba3b8183" containerID="aea9a008038f78ab0f46bcdb1a7b3d42296f27d32b9c0e567c8144ae697e1f97" exitCode=0 Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.238287 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-d6lqk" event={"ID":"b6777338-b2ee-4112-8f06-ea26ba3b8183","Type":"ContainerDied","Data":"aea9a008038f78ab0f46bcdb1a7b3d42296f27d32b9c0e567c8144ae697e1f97"} Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.469135 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.469217 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.469271 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.469965 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88445724b2a970c08e5f2c6402ea7e57704a8e3d0fb29457d3eb885ad064167b"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.470045 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" 
containerName="machine-config-daemon" containerID="cri-o://88445724b2a970c08e5f2c6402ea7e57704a8e3d0fb29457d3eb885ad064167b" gracePeriod=600 Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.772187 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.772629 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:50 crc kubenswrapper[5034]: I0105 22:04:50.816633 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.246893 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="88445724b2a970c08e5f2c6402ea7e57704a8e3d0fb29457d3eb885ad064167b" exitCode=0 Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.246965 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"88445724b2a970c08e5f2c6402ea7e57704a8e3d0fb29457d3eb885ad064167b"} Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.247003 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"c2ae88310c27c8bb417de34e2de1e513ef4f2cf46c667d74e4ed38e85d67a96f"} Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.247025 5034 scope.go:117] "RemoveContainer" containerID="c36642d80caae31121949484a500455715981b05431b5b90c4a54cf3db825275" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.298526 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.350782 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m6vz5"] Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.466612 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.614676 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862jz\" (UniqueName: \"kubernetes.io/projected/b6777338-b2ee-4112-8f06-ea26ba3b8183-kube-api-access-862jz\") pod \"b6777338-b2ee-4112-8f06-ea26ba3b8183\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.614752 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b6777338-b2ee-4112-8f06-ea26ba3b8183-node-mnt\") pod \"b6777338-b2ee-4112-8f06-ea26ba3b8183\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.614830 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b6777338-b2ee-4112-8f06-ea26ba3b8183-crc-storage\") pod \"b6777338-b2ee-4112-8f06-ea26ba3b8183\" (UID: \"b6777338-b2ee-4112-8f06-ea26ba3b8183\") " Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.614919 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6777338-b2ee-4112-8f06-ea26ba3b8183-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b6777338-b2ee-4112-8f06-ea26ba3b8183" (UID: "b6777338-b2ee-4112-8f06-ea26ba3b8183"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.615058 5034 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b6777338-b2ee-4112-8f06-ea26ba3b8183-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.619864 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6777338-b2ee-4112-8f06-ea26ba3b8183-kube-api-access-862jz" (OuterVolumeSpecName: "kube-api-access-862jz") pod "b6777338-b2ee-4112-8f06-ea26ba3b8183" (UID: "b6777338-b2ee-4112-8f06-ea26ba3b8183"). InnerVolumeSpecName "kube-api-access-862jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.627746 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6777338-b2ee-4112-8f06-ea26ba3b8183-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b6777338-b2ee-4112-8f06-ea26ba3b8183" (UID: "b6777338-b2ee-4112-8f06-ea26ba3b8183"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.716286 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-862jz\" (UniqueName: \"kubernetes.io/projected/b6777338-b2ee-4112-8f06-ea26ba3b8183-kube-api-access-862jz\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:51 crc kubenswrapper[5034]: I0105 22:04:51.716315 5034 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b6777338-b2ee-4112-8f06-ea26ba3b8183-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:52 crc kubenswrapper[5034]: I0105 22:04:52.252306 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-d6lqk" event={"ID":"b6777338-b2ee-4112-8f06-ea26ba3b8183","Type":"ContainerDied","Data":"f9bb61cbbd8e1dd6bb4e9b35d729af316a02e526ea2c3061ad839058d8176237"} Jan 05 22:04:52 crc kubenswrapper[5034]: I0105 22:04:52.252524 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9bb61cbbd8e1dd6bb4e9b35d729af316a02e526ea2c3061ad839058d8176237" Jan 05 22:04:52 crc kubenswrapper[5034]: I0105 22:04:52.252330 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-d6lqk" Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.260747 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m6vz5" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerName="registry-server" containerID="cri-o://6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40" gracePeriod=2 Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.579105 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.644005 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh9cw\" (UniqueName: \"kubernetes.io/projected/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-kube-api-access-sh9cw\") pod \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.644212 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-utilities\") pod \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.644237 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-catalog-content\") pod \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\" (UID: \"2c668bd9-b1b3-4478-bda1-1402e2a5fafd\") " Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.645046 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-utilities" (OuterVolumeSpecName: "utilities") pod "2c668bd9-b1b3-4478-bda1-1402e2a5fafd" (UID: "2c668bd9-b1b3-4478-bda1-1402e2a5fafd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.650145 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-kube-api-access-sh9cw" (OuterVolumeSpecName: "kube-api-access-sh9cw") pod "2c668bd9-b1b3-4478-bda1-1402e2a5fafd" (UID: "2c668bd9-b1b3-4478-bda1-1402e2a5fafd"). InnerVolumeSpecName "kube-api-access-sh9cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.745499 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:53 crc kubenswrapper[5034]: I0105 22:04:53.745536 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh9cw\" (UniqueName: \"kubernetes.io/projected/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-kube-api-access-sh9cw\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.268026 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m6vz5" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.268049 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6vz5" event={"ID":"2c668bd9-b1b3-4478-bda1-1402e2a5fafd","Type":"ContainerDied","Data":"6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40"} Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.268122 5034 scope.go:117] "RemoveContainer" containerID="6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.268215 5034 generic.go:334] "Generic (PLEG): container finished" podID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerID="6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40" exitCode=0 Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.268276 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6vz5" event={"ID":"2c668bd9-b1b3-4478-bda1-1402e2a5fafd","Type":"ContainerDied","Data":"e5b53a08fc91dc957fbced5b51cbd1c0ec64873885b198c5174d5ac9cdc75c9d"} Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.282006 5034 scope.go:117] "RemoveContainer" containerID="9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.295982 5034 scope.go:117] "RemoveContainer" containerID="7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.313295 5034 scope.go:117] "RemoveContainer" containerID="6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40" Jan 05 22:04:54 crc kubenswrapper[5034]: E0105 22:04:54.313764 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40\": container with ID starting with 6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40 not found: ID does not exist" containerID="6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.313819 5034 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40"} err="failed to get container status \"6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40\": rpc error: code = NotFound desc = could not find container \"6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40\": container with ID starting with 6a9ff84559cc6ddfebeb8e0a863c7a1caf157438d6ac6c3be5fae3be5dfe2a40 not found: ID does not exist" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.313858 5034 scope.go:117] "RemoveContainer" containerID="9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3" Jan 05 22:04:54 crc kubenswrapper[5034]: E0105 22:04:54.314210 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3\": container with ID starting with 9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3 not found: ID does not exist" containerID="9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.314255 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3"} err="failed to get container status \"9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3\": rpc error: code = NotFound desc = could not find container \"9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3\": container with ID starting with 9a1075a3cb1b62a848d5c48d9c6bd92fc396cc03548f47fa1a8e624a2cd8a8c3 not found: ID does not exist" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.314290 5034 scope.go:117] "RemoveContainer" containerID="7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92" Jan 05 22:04:54 crc kubenswrapper[5034]: E0105 22:04:54.314635 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92\": container with ID starting with 7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92 not found: ID does not exist" containerID="7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.314670 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92"} err="failed to get container status \"7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92\": rpc error: code = NotFound desc = could not find container \"7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92\": container with ID starting with 7ce5d39ac8b024bcd0be49b80320fffe1dda825ec559db97d4e75e9015358f92 not found: ID does not exist" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.945405 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c668bd9-b1b3-4478-bda1-1402e2a5fafd" (UID: "2c668bd9-b1b3-4478-bda1-1402e2a5fafd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:04:54 crc kubenswrapper[5034]: I0105 22:04:54.960489 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c668bd9-b1b3-4478-bda1-1402e2a5fafd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:55 crc kubenswrapper[5034]: I0105 22:04:55.195940 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m6vz5"] Jan 05 22:04:55 crc kubenswrapper[5034]: I0105 22:04:55.200410 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m6vz5"] Jan 05 22:04:55 crc kubenswrapper[5034]: I0105 22:04:55.844603 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" path="/var/lib/kubelet/pods/2c668bd9-b1b3-4478-bda1-1402e2a5fafd/volumes" Jan 05 22:04:58 crc kubenswrapper[5034]: I0105 22:04:58.372076 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ll5rd" Jan 05 22:04:59 crc kubenswrapper[5034]: I0105 22:04:59.992482 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4"] Jan 05 22:04:59 crc kubenswrapper[5034]: E0105 22:04:59.993017 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6777338-b2ee-4112-8f06-ea26ba3b8183" containerName="storage" Jan 05 22:04:59 crc kubenswrapper[5034]: I0105 22:04:59.993033 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6777338-b2ee-4112-8f06-ea26ba3b8183" containerName="storage" Jan 05 22:04:59 crc kubenswrapper[5034]: E0105 22:04:59.993047 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerName="extract-utilities" Jan 05 22:04:59 crc kubenswrapper[5034]: I0105 22:04:59.993055 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerName="extract-utilities" Jan 05 22:04:59 crc kubenswrapper[5034]: E0105 22:04:59.993068 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerName="extract-content" Jan 05 22:04:59 crc kubenswrapper[5034]: I0105 22:04:59.993082 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerName="extract-content" Jan 05 22:04:59 crc kubenswrapper[5034]: E0105 22:04:59.993112 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerName="registry-server" Jan 05 22:04:59 crc kubenswrapper[5034]: I0105 22:04:59.993119 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerName="registry-server" Jan 05 22:04:59 crc kubenswrapper[5034]: I0105 22:04:59.993226 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6777338-b2ee-4112-8f06-ea26ba3b8183" containerName="storage" Jan 05 22:04:59 crc kubenswrapper[5034]: I0105 22:04:59.993241 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c668bd9-b1b3-4478-bda1-1402e2a5fafd" containerName="registry-server" Jan 05 22:04:59 crc kubenswrapper[5034]: I0105 22:04:59.994066 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.003461 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4"] Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.008400 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.123676 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.123744 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.123792 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4hz\" (UniqueName: \"kubernetes.io/projected/5b029c07-fd45-41d1-a25f-9a0653f3c70b-kube-api-access-rm4hz\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.224498 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.224568 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.224623 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4hz\" (UniqueName: \"kubernetes.io/projected/5b029c07-fd45-41d1-a25f-9a0653f3c70b-kube-api-access-rm4hz\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.225146 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.225175 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.243253 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4hz\" (UniqueName: \"kubernetes.io/projected/5b029c07-fd45-41d1-a25f-9a0653f3c70b-kube-api-access-rm4hz\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.308213 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:00 crc kubenswrapper[5034]: I0105 22:05:00.681563 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4"] Jan 05 22:05:01 crc kubenswrapper[5034]: I0105 22:05:01.304797 5034 generic.go:334] "Generic (PLEG): container finished" podID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerID="26017b2381cac6fb3767ddb76cba9e8d414df947bd7ec46df4bb2fd9db918b38" exitCode=0 Jan 05 22:05:01 crc kubenswrapper[5034]: I0105 22:05:01.304838 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" event={"ID":"5b029c07-fd45-41d1-a25f-9a0653f3c70b","Type":"ContainerDied","Data":"26017b2381cac6fb3767ddb76cba9e8d414df947bd7ec46df4bb2fd9db918b38"} Jan 05 22:05:01 crc kubenswrapper[5034]: I0105 22:05:01.304978 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" event={"ID":"5b029c07-fd45-41d1-a25f-9a0653f3c70b","Type":"ContainerStarted","Data":"0a02266a728ed8b2d60866447199265573aef6868920df8bcbd5bc65b262030f"} Jan 05 22:05:03 crc kubenswrapper[5034]: I0105 22:05:03.315683 5034 generic.go:334] "Generic (PLEG): container finished" podID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerID="6e9caf5801337e0f28ca501ca80f18a751300baa7c986867cadc1d38a59e5651" exitCode=0 Jan 05 22:05:03 crc kubenswrapper[5034]: I0105 22:05:03.315792 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" event={"ID":"5b029c07-fd45-41d1-a25f-9a0653f3c70b","Type":"ContainerDied","Data":"6e9caf5801337e0f28ca501ca80f18a751300baa7c986867cadc1d38a59e5651"} Jan 05 22:05:04 crc kubenswrapper[5034]: I0105 22:05:04.322898 5034 generic.go:334] "Generic (PLEG): container finished" podID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerID="5bff8f9cf4932bb83e0c99640146c0b72b698a2cd84a7b814734d0bb8a57a126" exitCode=0 Jan 05 22:05:04 crc kubenswrapper[5034]: I0105 
22:05:04.322959 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" event={"ID":"5b029c07-fd45-41d1-a25f-9a0653f3c70b","Type":"ContainerDied","Data":"5bff8f9cf4932bb83e0c99640146c0b72b698a2cd84a7b814734d0bb8a57a126"} Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.568589 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.695155 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-bundle\") pod \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.695292 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-util\") pod \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.695362 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm4hz\" (UniqueName: \"kubernetes.io/projected/5b029c07-fd45-41d1-a25f-9a0653f3c70b-kube-api-access-rm4hz\") pod \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\" (UID: \"5b029c07-fd45-41d1-a25f-9a0653f3c70b\") " Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.695870 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-bundle" (OuterVolumeSpecName: "bundle") pod "5b029c07-fd45-41d1-a25f-9a0653f3c70b" (UID: "5b029c07-fd45-41d1-a25f-9a0653f3c70b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.702059 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b029c07-fd45-41d1-a25f-9a0653f3c70b-kube-api-access-rm4hz" (OuterVolumeSpecName: "kube-api-access-rm4hz") pod "5b029c07-fd45-41d1-a25f-9a0653f3c70b" (UID: "5b029c07-fd45-41d1-a25f-9a0653f3c70b"). InnerVolumeSpecName "kube-api-access-rm4hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.717878 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-util" (OuterVolumeSpecName: "util") pod "5b029c07-fd45-41d1-a25f-9a0653f3c70b" (UID: "5b029c07-fd45-41d1-a25f-9a0653f3c70b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.797229 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm4hz\" (UniqueName: \"kubernetes.io/projected/5b029c07-fd45-41d1-a25f-9a0653f3c70b-kube-api-access-rm4hz\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.797260 5034 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:05 crc kubenswrapper[5034]: I0105 22:05:05.797269 5034 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b029c07-fd45-41d1-a25f-9a0653f3c70b-util\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:06 crc kubenswrapper[5034]: I0105 22:05:06.335292 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" event={"ID":"5b029c07-fd45-41d1-a25f-9a0653f3c70b","Type":"ContainerDied","Data":"0a02266a728ed8b2d60866447199265573aef6868920df8bcbd5bc65b262030f"} Jan 05 22:05:06 crc kubenswrapper[5034]: I0105 22:05:06.335579 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a02266a728ed8b2d60866447199265573aef6868920df8bcbd5bc65b262030f" Jan 05 22:05:06 crc kubenswrapper[5034]: I0105 22:05:06.335342 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.586898 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-9fngq"] Jan 05 22:05:07 crc kubenswrapper[5034]: E0105 22:05:07.587169 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerName="extract" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.587184 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerName="extract" Jan 05 22:05:07 crc kubenswrapper[5034]: E0105 22:05:07.587192 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerName="pull" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.587197 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerName="pull" Jan 05 22:05:07 crc kubenswrapper[5034]: E0105 22:05:07.587210 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerName="util" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.587216 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerName="util" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.587300 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b029c07-fd45-41d1-a25f-9a0653f3c70b" containerName="extract" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.587699 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-9fngq" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.589391 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.589843 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.589952 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-nqth8" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.604526 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-9fngq"] Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.721881 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95hrg\" (UniqueName: \"kubernetes.io/projected/87c765d8-d039-4341-9980-e2b22b54ceac-kube-api-access-95hrg\") pod \"nmstate-operator-6769fb99d-9fngq\" (UID: \"87c765d8-d039-4341-9980-e2b22b54ceac\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-9fngq" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.823211 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95hrg\" (UniqueName: \"kubernetes.io/projected/87c765d8-d039-4341-9980-e2b22b54ceac-kube-api-access-95hrg\") pod \"nmstate-operator-6769fb99d-9fngq\" (UID: \"87c765d8-d039-4341-9980-e2b22b54ceac\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-9fngq" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.847053 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95hrg\" (UniqueName: \"kubernetes.io/projected/87c765d8-d039-4341-9980-e2b22b54ceac-kube-api-access-95hrg\") pod \"nmstate-operator-6769fb99d-9fngq\" (UID: \"87c765d8-d039-4341-9980-e2b22b54ceac\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-9fngq" Jan 05 22:05:07 crc kubenswrapper[5034]: I0105 22:05:07.902768 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-9fngq" Jan 05 22:05:08 crc kubenswrapper[5034]: I0105 22:05:08.092956 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-9fngq"] Jan 05 22:05:08 crc kubenswrapper[5034]: I0105 22:05:08.346239 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-9fngq" event={"ID":"87c765d8-d039-4341-9980-e2b22b54ceac","Type":"ContainerStarted","Data":"e36df45b861930f4e8a46529ff967299e10925fd24f9a08d7a7eead1c081ecce"} Jan 05 22:05:12 crc kubenswrapper[5034]: I0105 22:05:12.367202 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-9fngq" event={"ID":"87c765d8-d039-4341-9980-e2b22b54ceac","Type":"ContainerStarted","Data":"fda69d2505190a16476de39fe8ee2322a99fed46a7d6c5597ab2edd0b5e0cfb5"} Jan 05 22:05:12 crc kubenswrapper[5034]: I0105 22:05:12.385044 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-9fngq" podStartSLOduration=1.545711233 podStartE2EDuration="5.385025457s" podCreationTimestamp="2026-01-05 22:05:07 +0000 UTC" firstStartedPulling="2026-01-05 22:05:08.104733748 +0000 UTC m=+800.476733187" lastFinishedPulling="2026-01-05 22:05:11.944047972 +0000 UTC m=+804.316047411" observedRunningTime="2026-01-05 22:05:12.380838668 +0000 UTC m=+804.752838107" watchObservedRunningTime="2026-01-05 22:05:12.385025457 +0000 UTC m=+804.757024906" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.356606 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-65khr"] Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.357755 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.359574 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fd6v5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.376917 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-65khr"] Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.383435 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5"] Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.384293 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.386742 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.388664 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7cn7q"] Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.389393 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.396108 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5"] Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.492211 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-bw8c5\" (UID: \"f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.492298 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxz9\" (UniqueName: \"kubernetes.io/projected/f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb-kube-api-access-5pxz9\") pod \"nmstate-webhook-f8fb84555-bw8c5\" (UID: \"f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.492336 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-ovs-socket\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.492355 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6b5\" (UniqueName: \"kubernetes.io/projected/d6e6c223-89b8-4e35-a4cf-1442486c98dd-kube-api-access-zl6b5\") pod \"nmstate-metrics-7f7f7578db-65khr\" (UID: \"d6e6c223-89b8-4e35-a4cf-1442486c98dd\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.492406 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch75h\" (UniqueName: \"kubernetes.io/projected/04881f29-9640-4518-bf55-f893f4f27c26-kube-api-access-ch75h\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.492422 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-nmstate-lock\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.492440 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-dbus-socket\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.500754 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv"] Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.501505 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.503974 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.505362 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cxldb" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.505539 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.513816 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv"] Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594275 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrrh7\" (UniqueName: \"kubernetes.io/projected/9899044f-5e88-4777-b627-f7dcc60960a5-kube-api-access-nrrh7\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594329 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-bw8c5\" (UID: \"f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594352 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9899044f-5e88-4777-b627-f7dcc60960a5-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594542 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxz9\" (UniqueName: \"kubernetes.io/projected/f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb-kube-api-access-5pxz9\") pod \"nmstate-webhook-f8fb84555-bw8c5\" (UID: \"f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594635 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-ovs-socket\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594684 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl6b5\" (UniqueName: \"kubernetes.io/projected/d6e6c223-89b8-4e35-a4cf-1442486c98dd-kube-api-access-zl6b5\") pod \"nmstate-metrics-7f7f7578db-65khr\" (UID: \"d6e6c223-89b8-4e35-a4cf-1442486c98dd\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594723 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch75h\" (UniqueName: 
\"kubernetes.io/projected/04881f29-9640-4518-bf55-f893f4f27c26-kube-api-access-ch75h\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594755 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9899044f-5e88-4777-b627-f7dcc60960a5-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594774 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-nmstate-lock\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594792 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-dbus-socket\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.594753 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-ovs-socket\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.595095 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-nmstate-lock\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.595165 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/04881f29-9640-4518-bf55-f893f4f27c26-dbus-socket\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.600169 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-bw8c5\" (UID: \"f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.615366 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl6b5\" (UniqueName: \"kubernetes.io/projected/d6e6c223-89b8-4e35-a4cf-1442486c98dd-kube-api-access-zl6b5\") pod \"nmstate-metrics-7f7f7578db-65khr\" (UID: \"d6e6c223-89b8-4e35-a4cf-1442486c98dd\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.615774 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch75h\" (UniqueName: 
\"kubernetes.io/projected/04881f29-9640-4518-bf55-f893f4f27c26-kube-api-access-ch75h\") pod \"nmstate-handler-7cn7q\" (UID: \"04881f29-9640-4518-bf55-f893f4f27c26\") " pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.622526 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxz9\" (UniqueName: \"kubernetes.io/projected/f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb-kube-api-access-5pxz9\") pod \"nmstate-webhook-f8fb84555-bw8c5\" (UID: \"f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.674036 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.696213 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9899044f-5e88-4777-b627-f7dcc60960a5-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.696263 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrrh7\" (UniqueName: \"kubernetes.io/projected/9899044f-5e88-4777-b627-f7dcc60960a5-kube-api-access-nrrh7\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.696294 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9899044f-5e88-4777-b627-f7dcc60960a5-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: E0105 22:05:13.696429 5034 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 05 22:05:13 crc kubenswrapper[5034]: E0105 22:05:13.696498 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9899044f-5e88-4777-b627-f7dcc60960a5-plugin-serving-cert podName:9899044f-5e88-4777-b627-f7dcc60960a5 nodeName:}" failed. No retries permitted until 2026-01-05 22:05:14.196479035 +0000 UTC m=+806.568478474 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/9899044f-5e88-4777-b627-f7dcc60960a5-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-f97xv" (UID: "9899044f-5e88-4777-b627-f7dcc60960a5") : secret "plugin-serving-cert" not found Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.696966 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69866dbfb5-9mzp5"] Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.697263 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9899044f-5e88-4777-b627-f7dcc60960a5-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.698211 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.701235 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.716992 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.721175 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrrh7\" (UniqueName: \"kubernetes.io/projected/9899044f-5e88-4777-b627-f7dcc60960a5-kube-api-access-nrrh7\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.730736 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69866dbfb5-9mzp5"] Jan 05 22:05:13 crc kubenswrapper[5034]: W0105 22:05:13.765730 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04881f29_9640_4518_bf55_f893f4f27c26.slice/crio-6ba0c693da3c366d857bf9197a255f0fb038e82fac1ee9e2565a41d51576ac5e WatchSource:0}: Error finding container 6ba0c693da3c366d857bf9197a255f0fb038e82fac1ee9e2565a41d51576ac5e: Status 404 returned error can't find the container with id 6ba0c693da3c366d857bf9197a255f0fb038e82fac1ee9e2565a41d51576ac5e Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.799154 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-oauth-serving-cert\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.799432 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-config\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.799480 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-trusted-ca-bundle\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.799559 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-service-ca\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.799603 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblx7\" (UniqueName: \"kubernetes.io/projected/4c65972c-3b5d-4f96-9c30-ece8858f150e-kube-api-access-rblx7\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.799652 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-oauth-config\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.799677 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-serving-cert\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.900512 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-service-ca\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.900594 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rblx7\" (UniqueName: \"kubernetes.io/projected/4c65972c-3b5d-4f96-9c30-ece8858f150e-kube-api-access-rblx7\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.900657 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-oauth-config\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.900675 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-serving-cert\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.900709 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-config\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.900733 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-oauth-serving-cert\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.900751 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-trusted-ca-bundle\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.901762 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-service-ca\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.901906 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-trusted-ca-bundle\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.903007 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-config\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.903898 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c65972c-3b5d-4f96-9c30-ece8858f150e-oauth-serving-cert\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.906452 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-oauth-config\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.906930 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c65972c-3b5d-4f96-9c30-ece8858f150e-console-serving-cert\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:13 crc kubenswrapper[5034]: I0105 22:05:13.920209 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-rblx7\" (UniqueName: \"kubernetes.io/projected/4c65972c-3b5d-4f96-9c30-ece8858f150e-kube-api-access-rblx7\") pod \"console-69866dbfb5-9mzp5\" (UID: \"4c65972c-3b5d-4f96-9c30-ece8858f150e\") " pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.001265 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-65khr"] Jan 05 22:05:14 crc kubenswrapper[5034]: W0105 22:05:14.007962 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e6c223_89b8_4e35_a4cf_1442486c98dd.slice/crio-989d4b61f2ca5757e2130fb8eb6ec80083613b1d7886301b55a0d0a54b80fd31 WatchSource:0}: Error finding container 989d4b61f2ca5757e2130fb8eb6ec80083613b1d7886301b55a0d0a54b80fd31: Status 404 returned error can't find the container with id 989d4b61f2ca5757e2130fb8eb6ec80083613b1d7886301b55a0d0a54b80fd31 Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.039337 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5"] Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.043777 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.204749 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9899044f-5e88-4777-b627-f7dcc60960a5-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.210432 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9899044f-5e88-4777-b627-f7dcc60960a5-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-f97xv\" (UID: \"9899044f-5e88-4777-b627-f7dcc60960a5\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.237450 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69866dbfb5-9mzp5"] Jan 05 22:05:14 crc kubenswrapper[5034]: W0105 22:05:14.241279 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c65972c_3b5d_4f96_9c30_ece8858f150e.slice/crio-a30d6b63166e0fd9c2da433d3dbc9a44e2381510b17738c899feed074e822e46 WatchSource:0}: Error finding container a30d6b63166e0fd9c2da433d3dbc9a44e2381510b17738c899feed074e822e46: Status 404 returned error can't find the container with id a30d6b63166e0fd9c2da433d3dbc9a44e2381510b17738c899feed074e822e46 Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.378727 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" event={"ID":"d6e6c223-89b8-4e35-a4cf-1442486c98dd","Type":"ContainerStarted","Data":"989d4b61f2ca5757e2130fb8eb6ec80083613b1d7886301b55a0d0a54b80fd31"} Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.380477 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69866dbfb5-9mzp5" event={"ID":"4c65972c-3b5d-4f96-9c30-ece8858f150e","Type":"ContainerStarted","Data":"a30d6b63166e0fd9c2da433d3dbc9a44e2381510b17738c899feed074e822e46"} Jan 05 
22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.382623 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7cn7q" event={"ID":"04881f29-9640-4518-bf55-f893f4f27c26","Type":"ContainerStarted","Data":"6ba0c693da3c366d857bf9197a255f0fb038e82fac1ee9e2565a41d51576ac5e"} Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.383919 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" event={"ID":"f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb","Type":"ContainerStarted","Data":"52720a9d404b0decbbee21b22b3472cf82d1b4dc6c3c9b43469d01165cf0a4ef"} Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.416715 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" Jan 05 22:05:14 crc kubenswrapper[5034]: I0105 22:05:14.621991 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv"] Jan 05 22:05:14 crc kubenswrapper[5034]: W0105 22:05:14.632969 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9899044f_5e88_4777_b627_f7dcc60960a5.slice/crio-497c17032cc9c09859a5a5882655a6ba3458c11c92844af8dc92d96b7b02b80b WatchSource:0}: Error finding container 497c17032cc9c09859a5a5882655a6ba3458c11c92844af8dc92d96b7b02b80b: Status 404 returned error can't find the container with id 497c17032cc9c09859a5a5882655a6ba3458c11c92844af8dc92d96b7b02b80b Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.243528 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g7fph"] Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.244571 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.252847 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7fph"] Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.318838 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lct4t\" (UniqueName: \"kubernetes.io/projected/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-kube-api-access-lct4t\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.319369 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-utilities\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.319505 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-catalog-content\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.403038 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69866dbfb5-9mzp5" event={"ID":"4c65972c-3b5d-4f96-9c30-ece8858f150e","Type":"ContainerStarted","Data":"57f9ddff4b769692006de74b58b28b57d4d03634d629bfbbd1a9bff471a1f665"} Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.404995 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" event={"ID":"9899044f-5e88-4777-b627-f7dcc60960a5","Type":"ContainerStarted","Data":"497c17032cc9c09859a5a5882655a6ba3458c11c92844af8dc92d96b7b02b80b"} Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.421710 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-catalog-content\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.421764 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lct4t\" (UniqueName: \"kubernetes.io/projected/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-kube-api-access-lct4t\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.421812 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-utilities\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.422303 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-utilities\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.422754 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-catalog-content\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.436668 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69866dbfb5-9mzp5" podStartSLOduration=2.436644749 podStartE2EDuration="2.436644749s" podCreationTimestamp="2026-01-05 22:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:05:15.423622379 +0000 UTC m=+807.795621818" watchObservedRunningTime="2026-01-05 22:05:15.436644749 +0000 UTC m=+807.808644198" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.454646 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lct4t\" (UniqueName: \"kubernetes.io/projected/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-kube-api-access-lct4t\") pod \"redhat-marketplace-g7fph\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.583494 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:15 crc kubenswrapper[5034]: I0105 22:05:15.813967 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7fph"] Jan 05 22:05:15 crc kubenswrapper[5034]: W0105 22:05:15.832409 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod067f49e5_426b_4da9_96a2_b1b4a8ebbde0.slice/crio-0814b7597b649256f1b6b83e09b87ecf08dbcd26645f48e2a1988a4ce748be54 WatchSource:0}: Error finding container 0814b7597b649256f1b6b83e09b87ecf08dbcd26645f48e2a1988a4ce748be54: Status 404 returned error can't find the container with id 0814b7597b649256f1b6b83e09b87ecf08dbcd26645f48e2a1988a4ce748be54 Jan 05 22:05:16 crc kubenswrapper[5034]: I0105 22:05:16.412053 5034 generic.go:334] "Generic (PLEG): container finished" podID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerID="46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb" exitCode=0 Jan 05 22:05:16 crc kubenswrapper[5034]: I0105 22:05:16.412126 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7fph" event={"ID":"067f49e5-426b-4da9-96a2-b1b4a8ebbde0","Type":"ContainerDied","Data":"46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb"} Jan 05 22:05:16 crc kubenswrapper[5034]: I0105 22:05:16.412457 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7fph" event={"ID":"067f49e5-426b-4da9-96a2-b1b4a8ebbde0","Type":"ContainerStarted","Data":"0814b7597b649256f1b6b83e09b87ecf08dbcd26645f48e2a1988a4ce748be54"} Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.424691 5034 generic.go:334] "Generic (PLEG): container finished" podID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" 
containerID="ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a" exitCode=0 Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.424788 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7fph" event={"ID":"067f49e5-426b-4da9-96a2-b1b4a8ebbde0","Type":"ContainerDied","Data":"ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a"} Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.430686 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" event={"ID":"d6e6c223-89b8-4e35-a4cf-1442486c98dd","Type":"ContainerStarted","Data":"eb9f7e841346ffa6bb279d9196b9082a3d3a8806a154a7606bc147d779ade725"} Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.432620 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" event={"ID":"9899044f-5e88-4777-b627-f7dcc60960a5","Type":"ContainerStarted","Data":"2c08c09d703d23d438b49911f08ee43e934dab389cfae60c9ce08a202b661a49"} Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.434551 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7cn7q" event={"ID":"04881f29-9640-4518-bf55-f893f4f27c26","Type":"ContainerStarted","Data":"9c765dc5bc06084cadde8a077b927932632ef8ba15f5098b6d1c1d0ad0b99dcd"} Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.434659 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.435984 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" event={"ID":"f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb","Type":"ContainerStarted","Data":"641423c5bdf8d06c6c063f13052b70a7fa97a52d1a16e8721f0ddbb07b6ff7d9"} Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.436127 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.461000 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-f97xv" podStartSLOduration=2.471377574 podStartE2EDuration="5.460976736s" podCreationTimestamp="2026-01-05 22:05:13 +0000 UTC" firstStartedPulling="2026-01-05 22:05:14.634956379 +0000 UTC m=+807.006955818" lastFinishedPulling="2026-01-05 22:05:17.624555551 +0000 UTC m=+809.996554980" observedRunningTime="2026-01-05 22:05:18.457809226 +0000 UTC m=+810.829808695" watchObservedRunningTime="2026-01-05 22:05:18.460976736 +0000 UTC m=+810.832976175" Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.493734 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" podStartSLOduration=1.911618747 podStartE2EDuration="5.493706536s" podCreationTimestamp="2026-01-05 22:05:13 +0000 UTC" firstStartedPulling="2026-01-05 22:05:14.05049364 +0000 UTC m=+806.422493079" lastFinishedPulling="2026-01-05 22:05:17.632581429 +0000 UTC m=+810.004580868" observedRunningTime="2026-01-05 22:05:18.492847521 +0000 UTC m=+810.864846970" watchObservedRunningTime="2026-01-05 22:05:18.493706536 +0000 UTC m=+810.865705975" Jan 05 22:05:18 crc kubenswrapper[5034]: I0105 22:05:18.493960 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7cn7q" 
podStartSLOduration=1.624982636 podStartE2EDuration="5.493955803s" podCreationTimestamp="2026-01-05 22:05:13 +0000 UTC" firstStartedPulling="2026-01-05 22:05:13.772601757 +0000 UTC m=+806.144601196" lastFinishedPulling="2026-01-05 22:05:17.641574924 +0000 UTC m=+810.013574363" observedRunningTime="2026-01-05 22:05:18.475407636 +0000 UTC m=+810.847407065" watchObservedRunningTime="2026-01-05 22:05:18.493955803 +0000 UTC m=+810.865955242" Jan 05 22:05:19 crc kubenswrapper[5034]: I0105 22:05:19.443214 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7fph" event={"ID":"067f49e5-426b-4da9-96a2-b1b4a8ebbde0","Type":"ContainerStarted","Data":"4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1"} Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.042063 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g7fph" podStartSLOduration=4.1884985 podStartE2EDuration="6.042041514s" podCreationTimestamp="2026-01-05 22:05:15 +0000 UTC" firstStartedPulling="2026-01-05 22:05:17.043396685 +0000 UTC m=+809.415396124" lastFinishedPulling="2026-01-05 22:05:18.896939699 +0000 UTC m=+811.268939138" observedRunningTime="2026-01-05 22:05:19.470835399 +0000 UTC m=+811.842834828" watchObservedRunningTime="2026-01-05 22:05:21.042041514 +0000 UTC m=+813.414040953" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.043692 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n9947"] Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.045014 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.060039 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9947"] Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.107565 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m749g\" (UniqueName: \"kubernetes.io/projected/e79302d3-4753-48b9-ac71-8e8e3570fe78-kube-api-access-m749g\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.107672 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-catalog-content\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.107698 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-utilities\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.210052 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-catalog-content\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " 
pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.210105 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-utilities\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.210132 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m749g\" (UniqueName: \"kubernetes.io/projected/e79302d3-4753-48b9-ac71-8e8e3570fe78-kube-api-access-m749g\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.210455 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-catalog-content\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.210560 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-utilities\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.229783 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m749g\" (UniqueName: \"kubernetes.io/projected/e79302d3-4753-48b9-ac71-8e8e3570fe78-kube-api-access-m749g\") pod \"certified-operators-n9947\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.360684 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.459366 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" event={"ID":"d6e6c223-89b8-4e35-a4cf-1442486c98dd","Type":"ContainerStarted","Data":"7d825691db2983124f8b352a4b88b3f3f894af6c0e0de8c61ee8bb6cb031001c"} Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.477887 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-65khr" podStartSLOduration=1.755911795 podStartE2EDuration="8.477866052s" podCreationTimestamp="2026-01-05 22:05:13 +0000 UTC" firstStartedPulling="2026-01-05 22:05:14.010481064 +0000 UTC m=+806.382480503" lastFinishedPulling="2026-01-05 22:05:20.732435321 +0000 UTC m=+813.104434760" observedRunningTime="2026-01-05 22:05:21.475744192 +0000 UTC m=+813.847743631" watchObservedRunningTime="2026-01-05 22:05:21.477866052 +0000 UTC m=+813.849865491" Jan 05 22:05:21 crc kubenswrapper[5034]: I0105 22:05:21.615878 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9947"] Jan 05 22:05:22 crc kubenswrapper[5034]: I0105 22:05:22.465707 5034 generic.go:334] "Generic (PLEG): container finished" podID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerID="681acc042e5a9e2fbb609d73ef2ec8726ffc905dc6aec9378f7959607d8535d7" exitCode=0 Jan 05 22:05:22 crc kubenswrapper[5034]: I0105 22:05:22.465779 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9947" event={"ID":"e79302d3-4753-48b9-ac71-8e8e3570fe78","Type":"ContainerDied","Data":"681acc042e5a9e2fbb609d73ef2ec8726ffc905dc6aec9378f7959607d8535d7"} Jan 05 22:05:22 crc kubenswrapper[5034]: I0105 22:05:22.465819 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9947" event={"ID":"e79302d3-4753-48b9-ac71-8e8e3570fe78","Type":"ContainerStarted","Data":"5bf858bc03f4c75a705ad1bb55ae103b98c644591e627b043ab7b512c3d8b47b"} Jan 05 22:05:23 crc kubenswrapper[5034]: I0105 22:05:23.740445 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7cn7q" Jan 05 22:05:24 crc kubenswrapper[5034]: I0105 22:05:24.043932 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:24 crc kubenswrapper[5034]: I0105 22:05:24.044366 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:24 crc kubenswrapper[5034]: I0105 22:05:24.049390 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:24 crc kubenswrapper[5034]: I0105 22:05:24.479072 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9947" event={"ID":"e79302d3-4753-48b9-ac71-8e8e3570fe78","Type":"ContainerStarted","Data":"cddc634d76fe52a70f0de5b45940f83e11cd76a1f1fa0689322c81f994bbf791"} Jan 05 22:05:24 crc kubenswrapper[5034]: I0105 22:05:24.482447 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69866dbfb5-9mzp5" Jan 05 22:05:24 crc kubenswrapper[5034]: I0105 22:05:24.535233 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8wssg"] Jan 05 22:05:25 crc 
kubenswrapper[5034]: I0105 22:05:25.486476 5034 generic.go:334] "Generic (PLEG): container finished" podID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerID="cddc634d76fe52a70f0de5b45940f83e11cd76a1f1fa0689322c81f994bbf791" exitCode=0 Jan 05 22:05:25 crc kubenswrapper[5034]: I0105 22:05:25.486560 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9947" event={"ID":"e79302d3-4753-48b9-ac71-8e8e3570fe78","Type":"ContainerDied","Data":"cddc634d76fe52a70f0de5b45940f83e11cd76a1f1fa0689322c81f994bbf791"} Jan 05 22:05:25 crc kubenswrapper[5034]: I0105 22:05:25.583805 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:25 crc kubenswrapper[5034]: I0105 22:05:25.583879 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:25 crc kubenswrapper[5034]: I0105 22:05:25.627713 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:26 crc kubenswrapper[5034]: I0105 22:05:26.494527 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9947" event={"ID":"e79302d3-4753-48b9-ac71-8e8e3570fe78","Type":"ContainerStarted","Data":"32386c1f0c661aeea2e1cdfcf1e9edc5d2b0a543e2ff59c1e4582d2770528188"} Jan 05 22:05:26 crc kubenswrapper[5034]: I0105 22:05:26.514492 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n9947" podStartSLOduration=1.8291798620000002 podStartE2EDuration="5.514475823s" podCreationTimestamp="2026-01-05 22:05:21 +0000 UTC" firstStartedPulling="2026-01-05 22:05:22.467373906 +0000 UTC m=+814.839373345" lastFinishedPulling="2026-01-05 22:05:26.152669867 +0000 UTC m=+818.524669306" observedRunningTime="2026-01-05 22:05:26.511694584 +0000 UTC m=+818.883694023" watchObservedRunningTime="2026-01-05 22:05:26.514475823 +0000 UTC m=+818.886475262" Jan 05 22:05:26 crc kubenswrapper[5034]: I0105 22:05:26.563062 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:27 crc kubenswrapper[5034]: I0105 22:05:27.834654 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7fph"] Jan 05 22:05:28 crc kubenswrapper[5034]: I0105 22:05:28.504375 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g7fph" podUID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerName="registry-server" containerID="cri-o://4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1" gracePeriod=2 Jan 05 22:05:28 crc kubenswrapper[5034]: I0105 22:05:28.855254 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:28 crc kubenswrapper[5034]: I0105 22:05:28.903598 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-utilities\") pod \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " Jan 05 22:05:28 crc kubenswrapper[5034]: I0105 22:05:28.903673 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lct4t\" (UniqueName: \"kubernetes.io/projected/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-kube-api-access-lct4t\") pod \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " Jan 05 22:05:28 crc kubenswrapper[5034]: I0105 22:05:28.903739 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-catalog-content\") pod \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\" (UID: \"067f49e5-426b-4da9-96a2-b1b4a8ebbde0\") " Jan 05 22:05:28 crc kubenswrapper[5034]: I0105 22:05:28.905896 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-utilities" (OuterVolumeSpecName: "utilities") pod "067f49e5-426b-4da9-96a2-b1b4a8ebbde0" (UID: "067f49e5-426b-4da9-96a2-b1b4a8ebbde0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:05:28 crc kubenswrapper[5034]: I0105 22:05:28.927104 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "067f49e5-426b-4da9-96a2-b1b4a8ebbde0" (UID: "067f49e5-426b-4da9-96a2-b1b4a8ebbde0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:05:28 crc kubenswrapper[5034]: I0105 22:05:28.937357 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-kube-api-access-lct4t" (OuterVolumeSpecName: "kube-api-access-lct4t") pod "067f49e5-426b-4da9-96a2-b1b4a8ebbde0" (UID: "067f49e5-426b-4da9-96a2-b1b4a8ebbde0"). InnerVolumeSpecName "kube-api-access-lct4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.005281 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.005313 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lct4t\" (UniqueName: \"kubernetes.io/projected/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-kube-api-access-lct4t\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.005323 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067f49e5-426b-4da9-96a2-b1b4a8ebbde0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.513662 5034 generic.go:334] "Generic (PLEG): container finished" podID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerID="4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1" exitCode=0 Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.513719 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7fph" event={"ID":"067f49e5-426b-4da9-96a2-b1b4a8ebbde0","Type":"ContainerDied","Data":"4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1"} Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.514206 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7fph" event={"ID":"067f49e5-426b-4da9-96a2-b1b4a8ebbde0","Type":"ContainerDied","Data":"0814b7597b649256f1b6b83e09b87ecf08dbcd26645f48e2a1988a4ce748be54"} Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.513808 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7fph" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.514238 5034 scope.go:117] "RemoveContainer" containerID="4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.531623 5034 scope.go:117] "RemoveContainer" containerID="ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.556179 5034 scope.go:117] "RemoveContainer" containerID="46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.556825 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7fph"] Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.560950 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7fph"] Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.574137 5034 scope.go:117] "RemoveContainer" containerID="4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1" Jan 05 22:05:29 crc kubenswrapper[5034]: E0105 22:05:29.574635 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1\": container with ID starting with 4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1 not found: ID does not exist" containerID="4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.574666 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1"} err="failed to get container status \"4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1\": rpc error: code = NotFound desc = could not find container \"4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1\": container with ID starting with 4fb6241e2fa917cdc154a05386b4bf82fb80a482e8af8468f6d4fc5b7442c9e1 not found: ID does not exist" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.574687 5034 scope.go:117] "RemoveContainer" containerID="ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a" Jan 05 22:05:29 crc kubenswrapper[5034]: E0105 22:05:29.575044 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a\": container with ID starting with ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a not found: ID does not exist" containerID="ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.575125 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a"} err="failed to get container status \"ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a\": rpc error: code = NotFound desc = could not find container \"ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a\": container with ID starting with ef4a9a13726f6e37cd925459c1d7486fff09a1dd0e838923ee23606f5c992e4a not found: ID does not exist" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.575173 5034 scope.go:117] "RemoveContainer" 
containerID="46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb" Jan 05 22:05:29 crc kubenswrapper[5034]: E0105 22:05:29.575528 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb\": container with ID starting with 46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb not found: ID does not exist" containerID="46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.575560 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb"} err="failed to get container status \"46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb\": rpc error: code = NotFound desc = could not find container \"46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb\": container with ID starting with 46544cb337049fa2d42c2e553153e98de4e59a9b419a923d0b68ed5a3cfdd6eb not found: ID does not exist" Jan 05 22:05:29 crc kubenswrapper[5034]: I0105 22:05:29.851063 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" path="/var/lib/kubelet/pods/067f49e5-426b-4da9-96a2-b1b4a8ebbde0/volumes" Jan 05 22:05:31 crc kubenswrapper[5034]: I0105 22:05:31.362891 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:31 crc kubenswrapper[5034]: I0105 22:05:31.362972 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:31 crc kubenswrapper[5034]: I0105 22:05:31.398419 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:31 crc kubenswrapper[5034]: I0105 22:05:31.566669 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:32 crc kubenswrapper[5034]: I0105 22:05:32.437058 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9947"] Jan 05 22:05:33 crc kubenswrapper[5034]: I0105 22:05:33.537427 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n9947" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerName="registry-server" containerID="cri-o://32386c1f0c661aeea2e1cdfcf1e9edc5d2b0a543e2ff59c1e4582d2770528188" gracePeriod=2 Jan 05 22:05:33 crc kubenswrapper[5034]: I0105 22:05:33.707717 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bw8c5" Jan 05 22:05:34 crc kubenswrapper[5034]: I0105 22:05:34.545110 5034 generic.go:334] "Generic (PLEG): container finished" podID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerID="32386c1f0c661aeea2e1cdfcf1e9edc5d2b0a543e2ff59c1e4582d2770528188" exitCode=0 Jan 05 22:05:34 crc kubenswrapper[5034]: I0105 22:05:34.545125 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9947" event={"ID":"e79302d3-4753-48b9-ac71-8e8e3570fe78","Type":"ContainerDied","Data":"32386c1f0c661aeea2e1cdfcf1e9edc5d2b0a543e2ff59c1e4582d2770528188"} Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.031128 5034 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.207072 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-utilities\") pod \"e79302d3-4753-48b9-ac71-8e8e3570fe78\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.207140 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m749g\" (UniqueName: \"kubernetes.io/projected/e79302d3-4753-48b9-ac71-8e8e3570fe78-kube-api-access-m749g\") pod \"e79302d3-4753-48b9-ac71-8e8e3570fe78\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.207175 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-catalog-content\") pod \"e79302d3-4753-48b9-ac71-8e8e3570fe78\" (UID: \"e79302d3-4753-48b9-ac71-8e8e3570fe78\") " Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.208168 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-utilities" (OuterVolumeSpecName: "utilities") pod "e79302d3-4753-48b9-ac71-8e8e3570fe78" (UID: "e79302d3-4753-48b9-ac71-8e8e3570fe78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.212410 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79302d3-4753-48b9-ac71-8e8e3570fe78-kube-api-access-m749g" (OuterVolumeSpecName: "kube-api-access-m749g") pod "e79302d3-4753-48b9-ac71-8e8e3570fe78" (UID: "e79302d3-4753-48b9-ac71-8e8e3570fe78"). InnerVolumeSpecName "kube-api-access-m749g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.262164 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e79302d3-4753-48b9-ac71-8e8e3570fe78" (UID: "e79302d3-4753-48b9-ac71-8e8e3570fe78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.308829 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.308882 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m749g\" (UniqueName: \"kubernetes.io/projected/e79302d3-4753-48b9-ac71-8e8e3570fe78-kube-api-access-m749g\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.308896 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79302d3-4753-48b9-ac71-8e8e3570fe78-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.552488 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9947" event={"ID":"e79302d3-4753-48b9-ac71-8e8e3570fe78","Type":"ContainerDied","Data":"5bf858bc03f4c75a705ad1bb55ae103b98c644591e627b043ab7b512c3d8b47b"} Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.552551 5034 scope.go:117] "RemoveContainer" containerID="32386c1f0c661aeea2e1cdfcf1e9edc5d2b0a543e2ff59c1e4582d2770528188" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.552572 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9947" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.568421 5034 scope.go:117] "RemoveContainer" containerID="cddc634d76fe52a70f0de5b45940f83e11cd76a1f1fa0689322c81f994bbf791" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.580851 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9947"] Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.584920 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n9947"] Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.610164 5034 scope.go:117] "RemoveContainer" containerID="681acc042e5a9e2fbb609d73ef2ec8726ffc905dc6aec9378f7959607d8535d7" Jan 05 22:05:35 crc kubenswrapper[5034]: I0105 22:05:35.854029 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" path="/var/lib/kubelet/pods/e79302d3-4753-48b9-ac71-8e8e3570fe78/volumes" Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.914413 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"] Jan 05 22:05:47 crc kubenswrapper[5034]: E0105 22:05:47.915160 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerName="registry-server" Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.915173 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerName="registry-server" Jan 05 22:05:47 crc kubenswrapper[5034]: E0105 22:05:47.915224 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerName="extract-utilities" Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.915233 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerName="extract-utilities" Jan 05 22:05:47 crc 
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.915365 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerName="extract-content"
Jan 05 22:05:47 crc kubenswrapper[5034]: E0105 22:05:47.915375 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerName="extract-content"
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.915382 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerName="extract-content"
Jan 05 22:05:47 crc kubenswrapper[5034]: E0105 22:05:47.915413 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerName="registry-server"
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.915421 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerName="registry-server"
Jan 05 22:05:47 crc kubenswrapper[5034]: E0105 22:05:47.915433 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerName="extract-utilities"
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.915440 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerName="extract-utilities"
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.915809 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79302d3-4753-48b9-ac71-8e8e3570fe78" containerName="registry-server"
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.915821 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="067f49e5-426b-4da9-96a2-b1b4a8ebbde0" containerName="registry-server"
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.916688 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.920608 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 05 22:05:47 crc kubenswrapper[5034]: I0105 22:05:47.924418 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"]
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.009179 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.009264 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wznh\" (UniqueName: \"kubernetes.io/projected/e8dba0a8-44e1-4a23-b14a-85826a656669-kube-api-access-5wznh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.009380 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.110611 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.110687 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wznh\" (UniqueName: \"kubernetes.io/projected/e8dba0a8-44e1-4a23-b14a-85826a656669-kube-api-access-5wznh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.110722 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.111029 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.111053 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.128992 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wznh\" (UniqueName: \"kubernetes.io/projected/e8dba0a8-44e1-4a23-b14a-85826a656669-kube-api-access-5wznh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.240600 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.439306 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7"]
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.645096 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7" event={"ID":"e8dba0a8-44e1-4a23-b14a-85826a656669","Type":"ContainerStarted","Data":"bccd3772d5a2c5db895674ae02a79a1689e5499c89b6fb2a83b2744a30ca195c"}
Jan 05 22:05:48 crc kubenswrapper[5034]: I0105 22:05:48.645142 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7" event={"ID":"e8dba0a8-44e1-4a23-b14a-85826a656669","Type":"ContainerStarted","Data":"144583bdc6ed9302c7f04d843f30cce4c8786f22fa4935f1490c7e65adf9c236"}
Jan 05 22:05:49 crc kubenswrapper[5034]: I0105 22:05:49.576297 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8wssg" podUID="d23b0bf5-8bd5-4891-b101-a278b984dbcf" containerName="console" containerID="cri-o://c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa" gracePeriod=15
Jan 05 22:05:49 crc kubenswrapper[5034]: I0105 22:05:49.652692 5034 generic.go:334] "Generic (PLEG): container finished" podID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerID="bccd3772d5a2c5db895674ae02a79a1689e5499c89b6fb2a83b2744a30ca195c" exitCode=0
Jan 05 22:05:49 crc kubenswrapper[5034]: I0105 22:05:49.652731 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7" event={"ID":"e8dba0a8-44e1-4a23-b14a-85826a656669","Type":"ContainerDied","Data":"bccd3772d5a2c5db895674ae02a79a1689e5499c89b6fb2a83b2744a30ca195c"}
Jan 05 22:05:49 crc kubenswrapper[5034]: I0105 22:05:49.908826 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8wssg_d23b0bf5-8bd5-4891-b101-a278b984dbcf/console/0.log"
Jan 05 22:05:49 crc kubenswrapper[5034]: I0105 22:05:49.908907 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8wssg"
Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.032693 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-service-ca\") pod \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") "
Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.032758 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-oauth-config\") pod \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") "
Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.032790 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-trusted-ca-bundle\") pod \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") "
Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.032847 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-serving-cert\") pod \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") "
Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.032913 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-config\") pod \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") "
Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.032939 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rm2g\" (UniqueName: \"kubernetes.io/projected/d23b0bf5-8bd5-4891-b101-a278b984dbcf-kube-api-access-9rm2g\") pod \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") "
Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.033092 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-oauth-serving-cert\") pod \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\" (UID: \"d23b0bf5-8bd5-4891-b101-a278b984dbcf\") "
Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.033764 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d23b0bf5-8bd5-4891-b101-a278b984dbcf" (UID: "d23b0bf5-8bd5-4891-b101-a278b984dbcf"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.033860 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d23b0bf5-8bd5-4891-b101-a278b984dbcf" (UID: "d23b0bf5-8bd5-4891-b101-a278b984dbcf"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.033895 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-service-ca" (OuterVolumeSpecName: "service-ca") pod "d23b0bf5-8bd5-4891-b101-a278b984dbcf" (UID: "d23b0bf5-8bd5-4891-b101-a278b984dbcf"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.034413 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-config" (OuterVolumeSpecName: "console-config") pod "d23b0bf5-8bd5-4891-b101-a278b984dbcf" (UID: "d23b0bf5-8bd5-4891-b101-a278b984dbcf"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.038174 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d23b0bf5-8bd5-4891-b101-a278b984dbcf-kube-api-access-9rm2g" (OuterVolumeSpecName: "kube-api-access-9rm2g") pod "d23b0bf5-8bd5-4891-b101-a278b984dbcf" (UID: "d23b0bf5-8bd5-4891-b101-a278b984dbcf"). InnerVolumeSpecName "kube-api-access-9rm2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.038509 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d23b0bf5-8bd5-4891-b101-a278b984dbcf" (UID: "d23b0bf5-8bd5-4891-b101-a278b984dbcf"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.039070 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d23b0bf5-8bd5-4891-b101-a278b984dbcf" (UID: "d23b0bf5-8bd5-4891-b101-a278b984dbcf"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.134254 5034 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.134566 5034 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.134580 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rm2g\" (UniqueName: \"kubernetes.io/projected/d23b0bf5-8bd5-4891-b101-a278b984dbcf-kube-api-access-9rm2g\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.134593 5034 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.134602 5034 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.134610 5034 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d23b0bf5-8bd5-4891-b101-a278b984dbcf-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.134638 5034 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d23b0bf5-8bd5-4891-b101-a278b984dbcf-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.658966 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8wssg_d23b0bf5-8bd5-4891-b101-a278b984dbcf/console/0.log" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.659008 5034 generic.go:334] "Generic (PLEG): container finished" podID="d23b0bf5-8bd5-4891-b101-a278b984dbcf" containerID="c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa" exitCode=2 Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.659037 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8wssg" event={"ID":"d23b0bf5-8bd5-4891-b101-a278b984dbcf","Type":"ContainerDied","Data":"c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa"} Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.659068 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8wssg" event={"ID":"d23b0bf5-8bd5-4891-b101-a278b984dbcf","Type":"ContainerDied","Data":"aa5a7ab18628dd832b0cf718cd668b2b1b327a06ec226231b472493b13dfd9a0"} Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.659086 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8wssg" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.659097 5034 scope.go:117] "RemoveContainer" containerID="c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.676805 5034 scope.go:117] "RemoveContainer" containerID="c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa" Jan 05 22:05:50 crc kubenswrapper[5034]: E0105 22:05:50.677180 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa\": container with ID starting with c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa not found: ID does not exist" containerID="c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.677214 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa"} err="failed to get container status \"c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa\": rpc error: code = NotFound desc = could not find container \"c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa\": container with ID starting with c37f93a00c20a76416011c1d1e4ae67ca2025461604c8f466f338024418facfa not found: ID does not exist" Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.686127 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8wssg"] Jan 05 22:05:50 crc kubenswrapper[5034]: I0105 22:05:50.689588 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8wssg"] Jan 05 22:05:51 crc kubenswrapper[5034]: I0105 22:05:51.668526 5034 generic.go:334] "Generic (PLEG): container finished" podID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerID="2c96feec95aa0b964d209435b2d47fa8cdb7cc435cd54b5d57a40949d437765f" exitCode=0 Jan 05 22:05:51 crc kubenswrapper[5034]: I0105 22:05:51.668581 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7" event={"ID":"e8dba0a8-44e1-4a23-b14a-85826a656669","Type":"ContainerDied","Data":"2c96feec95aa0b964d209435b2d47fa8cdb7cc435cd54b5d57a40949d437765f"} Jan 05 22:05:51 crc kubenswrapper[5034]: I0105 22:05:51.845636 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d23b0bf5-8bd5-4891-b101-a278b984dbcf" path="/var/lib/kubelet/pods/d23b0bf5-8bd5-4891-b101-a278b984dbcf/volumes" Jan 05 22:05:52 crc kubenswrapper[5034]: I0105 22:05:52.676499 5034 generic.go:334] "Generic (PLEG): container finished" podID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerID="a634c07f5a0024c8bda55464f740b80477dd221a8bb48e301a10a8e1b030a1d4" exitCode=0 Jan 05 22:05:52 crc kubenswrapper[5034]: I0105 22:05:52.676539 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7" event={"ID":"e8dba0a8-44e1-4a23-b14a-85826a656669","Type":"ContainerDied","Data":"a634c07f5a0024c8bda55464f740b80477dd221a8bb48e301a10a8e1b030a1d4"} Jan 05 22:05:53 crc kubenswrapper[5034]: I0105 22:05:53.925041 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7" Jan 05 22:05:53 crc kubenswrapper[5034]: I0105 22:05:53.985377 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-bundle\") pod \"e8dba0a8-44e1-4a23-b14a-85826a656669\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " Jan 05 22:05:53 crc kubenswrapper[5034]: I0105 22:05:53.985563 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wznh\" (UniqueName: \"kubernetes.io/projected/e8dba0a8-44e1-4a23-b14a-85826a656669-kube-api-access-5wznh\") pod \"e8dba0a8-44e1-4a23-b14a-85826a656669\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " Jan 05 22:05:53 crc kubenswrapper[5034]: I0105 22:05:53.985648 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-util\") pod \"e8dba0a8-44e1-4a23-b14a-85826a656669\" (UID: \"e8dba0a8-44e1-4a23-b14a-85826a656669\") " Jan 05 22:05:53 crc kubenswrapper[5034]: I0105 22:05:53.992564 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-bundle" (OuterVolumeSpecName: "bundle") pod "e8dba0a8-44e1-4a23-b14a-85826a656669" (UID: "e8dba0a8-44e1-4a23-b14a-85826a656669"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:05:53 crc kubenswrapper[5034]: I0105 22:05:53.995564 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8dba0a8-44e1-4a23-b14a-85826a656669-kube-api-access-5wznh" (OuterVolumeSpecName: "kube-api-access-5wznh") pod "e8dba0a8-44e1-4a23-b14a-85826a656669" (UID: "e8dba0a8-44e1-4a23-b14a-85826a656669"). InnerVolumeSpecName "kube-api-access-5wznh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:05:54 crc kubenswrapper[5034]: I0105 22:05:54.003156 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-util" (OuterVolumeSpecName: "util") pod "e8dba0a8-44e1-4a23-b14a-85826a656669" (UID: "e8dba0a8-44e1-4a23-b14a-85826a656669"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:05:54 crc kubenswrapper[5034]: I0105 22:05:54.087359 5034 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-util\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:54 crc kubenswrapper[5034]: I0105 22:05:54.087403 5034 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8dba0a8-44e1-4a23-b14a-85826a656669-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:54 crc kubenswrapper[5034]: I0105 22:05:54.087412 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wznh\" (UniqueName: \"kubernetes.io/projected/e8dba0a8-44e1-4a23-b14a-85826a656669-kube-api-access-5wznh\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:54 crc kubenswrapper[5034]: I0105 22:05:54.690507 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7" event={"ID":"e8dba0a8-44e1-4a23-b14a-85826a656669","Type":"ContainerDied","Data":"144583bdc6ed9302c7f04d843f30cce4c8786f22fa4935f1490c7e65adf9c236"} Jan 05 22:05:54 crc kubenswrapper[5034]: I0105 22:05:54.690557 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="144583bdc6ed9302c7f04d843f30cce4c8786f22fa4935f1490c7e65adf9c236" Jan 05 22:05:54 crc kubenswrapper[5034]: I0105 22:05:54.690654 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7" Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.811566 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"] Jan 05 22:06:02 crc kubenswrapper[5034]: E0105 22:06:02.812289 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerName="pull" Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.812305 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerName="pull" Jan 05 22:06:02 crc kubenswrapper[5034]: E0105 22:06:02.812318 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerName="util" Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.812326 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerName="util" Jan 05 22:06:02 crc kubenswrapper[5034]: E0105 22:06:02.812342 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerName="extract" Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.812350 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerName="extract" Jan 05 22:06:02 crc kubenswrapper[5034]: E0105 22:06:02.812363 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23b0bf5-8bd5-4891-b101-a278b984dbcf" containerName="console" Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.812371 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23b0bf5-8bd5-4891-b101-a278b984dbcf" containerName="console" Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.812487 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8dba0a8-44e1-4a23-b14a-85826a656669" containerName="extract" Jan 
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.812507 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23b0bf5-8bd5-4891-b101-a278b984dbcf" containerName="console"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.812969 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.816555 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.816958 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.817201 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dzplm"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.817391 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.817626 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.833943 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"]
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.907901 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-apiservice-cert\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.907975 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-webhook-cert\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:02 crc kubenswrapper[5034]: I0105 22:06:02.908001 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slgjg\" (UniqueName: \"kubernetes.io/projected/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-kube-api-access-slgjg\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.009130 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-apiservice-cert\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.009215 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-webhook-cert\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.009235 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slgjg\" (UniqueName: \"kubernetes.io/projected/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-kube-api-access-slgjg\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.015328 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-apiservice-cert\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.015707 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-webhook-cert\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.027229 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slgjg\" (UniqueName: \"kubernetes.io/projected/8200b031-48ef-4b6c-8ee2-bddf6e8cde98-kube-api-access-slgjg\") pod \"metallb-operator-controller-manager-59f99c4667-mtskn\" (UID: \"8200b031-48ef-4b6c-8ee2-bddf6e8cde98\") " pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.084609 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj"]
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.085540 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj"
Jan 05 22:06:03 crc kubenswrapper[5034]: W0105 22:06:03.087052 5034 reflector.go:561] object-"metallb-system"/"controller-dockercfg-cwtsh": failed to list *v1.Secret: secrets "controller-dockercfg-cwtsh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Jan 05 22:06:03 crc kubenswrapper[5034]: E0105 22:06:03.087115 5034 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-cwtsh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"controller-dockercfg-cwtsh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.087482 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.087583 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.105007 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj"]
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.131607 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.211993 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxng\" (UniqueName: \"kubernetes.io/projected/f76a0977-f500-4b92-8eee-304a3c7385ad-kube-api-access-7cxng\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.212395 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f76a0977-f500-4b92-8eee-304a3c7385ad-apiservice-cert\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.212459 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f76a0977-f500-4b92-8eee-304a3c7385ad-webhook-cert\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj"
Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.317493 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f76a0977-f500-4b92-8eee-304a3c7385ad-apiservice-cert\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj"
I0105 22:06:03.317564 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f76a0977-f500-4b92-8eee-304a3c7385ad-webhook-cert\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.317604 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cxng\" (UniqueName: \"kubernetes.io/projected/f76a0977-f500-4b92-8eee-304a3c7385ad-kube-api-access-7cxng\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.328483 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f76a0977-f500-4b92-8eee-304a3c7385ad-apiservice-cert\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.338788 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f76a0977-f500-4b92-8eee-304a3c7385ad-webhook-cert\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.357803 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cxng\" (UniqueName: \"kubernetes.io/projected/f76a0977-f500-4b92-8eee-304a3c7385ad-kube-api-access-7cxng\") pod \"metallb-operator-webhook-server-6d7c77b87b-md8xj\" (UID: \"f76a0977-f500-4b92-8eee-304a3c7385ad\") " pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.465221 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn"] Jan 05 22:06:03 crc kubenswrapper[5034]: W0105 22:06:03.475567 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8200b031_48ef_4b6c_8ee2_bddf6e8cde98.slice/crio-221e74837bd1f2820f406f0ecca458e24fc1fbe69b5005f30cfeb0c538653ec0 WatchSource:0}: Error finding container 221e74837bd1f2820f406f0ecca458e24fc1fbe69b5005f30cfeb0c538653ec0: Status 404 returned error can't find the container with id 221e74837bd1f2820f406f0ecca458e24fc1fbe69b5005f30cfeb0c538653ec0 Jan 05 22:06:03 crc kubenswrapper[5034]: I0105 22:06:03.737826 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn" event={"ID":"8200b031-48ef-4b6c-8ee2-bddf6e8cde98","Type":"ContainerStarted","Data":"221e74837bd1f2820f406f0ecca458e24fc1fbe69b5005f30cfeb0c538653ec0"} Jan 05 22:06:04 crc kubenswrapper[5034]: I0105 22:06:04.184040 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-cwtsh" Jan 05 22:06:04 crc kubenswrapper[5034]: I0105 22:06:04.185929 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" Jan 05 22:06:04 crc kubenswrapper[5034]: I0105 22:06:04.389782 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj"] Jan 05 22:06:04 crc kubenswrapper[5034]: W0105 22:06:04.402139 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf76a0977_f500_4b92_8eee_304a3c7385ad.slice/crio-3700f5ca6702e65b85cd7620b93e736b1ea77815a394c1ca84cd4dc5e023f646 WatchSource:0}: Error finding container 3700f5ca6702e65b85cd7620b93e736b1ea77815a394c1ca84cd4dc5e023f646: Status 404 returned error can't find the container with id 3700f5ca6702e65b85cd7620b93e736b1ea77815a394c1ca84cd4dc5e023f646 Jan 05 22:06:04 crc kubenswrapper[5034]: I0105 22:06:04.742423 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" event={"ID":"f76a0977-f500-4b92-8eee-304a3c7385ad","Type":"ContainerStarted","Data":"3700f5ca6702e65b85cd7620b93e736b1ea77815a394c1ca84cd4dc5e023f646"} Jan 05 22:06:07 crc kubenswrapper[5034]: I0105 22:06:07.776577 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn" event={"ID":"8200b031-48ef-4b6c-8ee2-bddf6e8cde98","Type":"ContainerStarted","Data":"413c98cb41e7a19e864be42ad25c948e1b21db19ae0522d57a7e006e8dad5ec6"} Jan 05 22:06:07 crc kubenswrapper[5034]: I0105 22:06:07.779168 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn" Jan 05 22:06:07 crc kubenswrapper[5034]: I0105 22:06:07.806691 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn" podStartSLOduration=2.0176926059999998 podStartE2EDuration="5.806672305s" podCreationTimestamp="2026-01-05 22:06:02 +0000 UTC" firstStartedPulling="2026-01-05 22:06:03.49075062 +0000 UTC m=+855.862750059" lastFinishedPulling="2026-01-05 22:06:07.279730319 +0000 UTC m=+859.651729758" observedRunningTime="2026-01-05 22:06:07.805619595 +0000 UTC m=+860.177619054" watchObservedRunningTime="2026-01-05 22:06:07.806672305 +0000 UTC m=+860.178671744" Jan 05 22:06:10 crc kubenswrapper[5034]: I0105 22:06:10.802759 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" event={"ID":"f76a0977-f500-4b92-8eee-304a3c7385ad","Type":"ContainerStarted","Data":"da97c96677aa801f2327a26debc857f62d7770d8c1a8dc43174c58607e89f64c"} Jan 05 22:06:10 crc kubenswrapper[5034]: I0105 22:06:10.803282 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" Jan 05 22:06:10 crc kubenswrapper[5034]: I0105 22:06:10.824761 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" podStartSLOduration=2.352608468 podStartE2EDuration="7.82474438s" podCreationTimestamp="2026-01-05 22:06:03 +0000 UTC" firstStartedPulling="2026-01-05 22:06:04.405856719 +0000 UTC m=+856.777856158" lastFinishedPulling="2026-01-05 22:06:09.877992631 +0000 UTC m=+862.249992070" observedRunningTime="2026-01-05 22:06:10.821338903 +0000 UTC m=+863.193338332" watchObservedRunningTime="2026-01-05 22:06:10.82474438 +0000 UTC 
m=+863.196743819" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.753504 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ffvx9"] Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.755488 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.761813 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffvx9"] Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.864155 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-catalog-content\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.864452 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwfn\" (UniqueName: \"kubernetes.io/projected/49447a07-ff32-41ec-b7cc-e21cd5d4a002-kube-api-access-ckwfn\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.864553 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-utilities\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.965825 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-catalog-content\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.966164 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckwfn\" (UniqueName: \"kubernetes.io/projected/49447a07-ff32-41ec-b7cc-e21cd5d4a002-kube-api-access-ckwfn\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.966314 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-utilities\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.966490 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-catalog-content\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.966699 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-utilities\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:20 crc kubenswrapper[5034]: I0105 22:06:20.984609 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckwfn\" (UniqueName: \"kubernetes.io/projected/49447a07-ff32-41ec-b7cc-e21cd5d4a002-kube-api-access-ckwfn\") pod \"community-operators-ffvx9\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:21 crc kubenswrapper[5034]: I0105 22:06:21.080985 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:21 crc kubenswrapper[5034]: I0105 22:06:21.421229 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffvx9"] Jan 05 22:06:21 crc kubenswrapper[5034]: I0105 22:06:21.856189 5034 generic.go:334] "Generic (PLEG): container finished" podID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerID="185892ca331a191bb2a72275ea9daa4ee27779a5397d4784c0dcc2449fab6efe" exitCode=0 Jan 05 22:06:21 crc kubenswrapper[5034]: I0105 22:06:21.856225 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffvx9" event={"ID":"49447a07-ff32-41ec-b7cc-e21cd5d4a002","Type":"ContainerDied","Data":"185892ca331a191bb2a72275ea9daa4ee27779a5397d4784c0dcc2449fab6efe"} Jan 05 22:06:21 crc kubenswrapper[5034]: I0105 22:06:21.856248 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffvx9" event={"ID":"49447a07-ff32-41ec-b7cc-e21cd5d4a002","Type":"ContainerStarted","Data":"ca857e30519f82034870b84d576a8a67e35af49c2129b58c03c34c8f20d84c41"} Jan 05 22:06:22 crc kubenswrapper[5034]: I0105 22:06:22.862518 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffvx9" event={"ID":"49447a07-ff32-41ec-b7cc-e21cd5d4a002","Type":"ContainerStarted","Data":"44c3374e631367ea352773ab40829c7373af3348821321ca3e0639ec47ea9be3"} Jan 05 22:06:23 crc kubenswrapper[5034]: I0105 22:06:23.868183 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffvx9" event={"ID":"49447a07-ff32-41ec-b7cc-e21cd5d4a002","Type":"ContainerDied","Data":"44c3374e631367ea352773ab40829c7373af3348821321ca3e0639ec47ea9be3"} Jan 05 22:06:23 crc kubenswrapper[5034]: I0105 22:06:23.868088 5034 generic.go:334] "Generic (PLEG): container finished" podID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerID="44c3374e631367ea352773ab40829c7373af3348821321ca3e0639ec47ea9be3" exitCode=0 Jan 05 22:06:24 crc kubenswrapper[5034]: I0105 22:06:24.190846 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6d7c77b87b-md8xj" Jan 05 22:06:24 crc kubenswrapper[5034]: I0105 22:06:24.896069 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffvx9" event={"ID":"49447a07-ff32-41ec-b7cc-e21cd5d4a002","Type":"ContainerStarted","Data":"228ca829347f8d054f50ea564b933625fd16284d51ad90f329a3540ee7eeac8b"} Jan 05 22:06:31 crc kubenswrapper[5034]: I0105 22:06:31.081114 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:31 crc kubenswrapper[5034]: I0105 
22:06:31.082611 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:31 crc kubenswrapper[5034]: I0105 22:06:31.131244 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:31 crc kubenswrapper[5034]: I0105 22:06:31.151314 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ffvx9" podStartSLOduration=8.575560993 podStartE2EDuration="11.151296655s" podCreationTimestamp="2026-01-05 22:06:20 +0000 UTC" firstStartedPulling="2026-01-05 22:06:21.858779683 +0000 UTC m=+874.230779122" lastFinishedPulling="2026-01-05 22:06:24.434515355 +0000 UTC m=+876.806514784" observedRunningTime="2026-01-05 22:06:24.912306104 +0000 UTC m=+877.284305553" watchObservedRunningTime="2026-01-05 22:06:31.151296655 +0000 UTC m=+883.523296094" Jan 05 22:06:31 crc kubenswrapper[5034]: I0105 22:06:31.974316 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:32 crc kubenswrapper[5034]: I0105 22:06:32.022627 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffvx9"] Jan 05 22:06:33 crc kubenswrapper[5034]: I0105 22:06:33.942942 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ffvx9" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerName="registry-server" containerID="cri-o://228ca829347f8d054f50ea564b933625fd16284d51ad90f329a3540ee7eeac8b" gracePeriod=2 Jan 05 22:06:34 crc kubenswrapper[5034]: I0105 22:06:34.951456 5034 generic.go:334] "Generic (PLEG): container finished" podID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerID="228ca829347f8d054f50ea564b933625fd16284d51ad90f329a3540ee7eeac8b" exitCode=0 Jan 05 22:06:34 crc kubenswrapper[5034]: I0105 22:06:34.951537 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffvx9" event={"ID":"49447a07-ff32-41ec-b7cc-e21cd5d4a002","Type":"ContainerDied","Data":"228ca829347f8d054f50ea564b933625fd16284d51ad90f329a3540ee7eeac8b"} Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.421921 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.566974 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-utilities\") pod \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.567032 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckwfn\" (UniqueName: \"kubernetes.io/projected/49447a07-ff32-41ec-b7cc-e21cd5d4a002-kube-api-access-ckwfn\") pod \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.567117 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-catalog-content\") pod \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\" (UID: \"49447a07-ff32-41ec-b7cc-e21cd5d4a002\") " Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.567793 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-utilities" (OuterVolumeSpecName: "utilities") pod "49447a07-ff32-41ec-b7cc-e21cd5d4a002" (UID: "49447a07-ff32-41ec-b7cc-e21cd5d4a002"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.571895 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49447a07-ff32-41ec-b7cc-e21cd5d4a002-kube-api-access-ckwfn" (OuterVolumeSpecName: "kube-api-access-ckwfn") pod "49447a07-ff32-41ec-b7cc-e21cd5d4a002" (UID: "49447a07-ff32-41ec-b7cc-e21cd5d4a002"). InnerVolumeSpecName "kube-api-access-ckwfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.614789 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49447a07-ff32-41ec-b7cc-e21cd5d4a002" (UID: "49447a07-ff32-41ec-b7cc-e21cd5d4a002"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.668277 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.668320 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckwfn\" (UniqueName: \"kubernetes.io/projected/49447a07-ff32-41ec-b7cc-e21cd5d4a002-kube-api-access-ckwfn\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.668330 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49447a07-ff32-41ec-b7cc-e21cd5d4a002-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.961762 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffvx9" event={"ID":"49447a07-ff32-41ec-b7cc-e21cd5d4a002","Type":"ContainerDied","Data":"ca857e30519f82034870b84d576a8a67e35af49c2129b58c03c34c8f20d84c41"} Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.961821 5034 scope.go:117] "RemoveContainer" containerID="228ca829347f8d054f50ea564b933625fd16284d51ad90f329a3540ee7eeac8b" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.961841 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffvx9" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.987936 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffvx9"] Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.992432 5034 scope.go:117] "RemoveContainer" containerID="44c3374e631367ea352773ab40829c7373af3348821321ca3e0639ec47ea9be3" Jan 05 22:06:35 crc kubenswrapper[5034]: I0105 22:06:35.996274 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ffvx9"] Jan 05 22:06:36 crc kubenswrapper[5034]: I0105 22:06:36.012521 5034 scope.go:117] "RemoveContainer" containerID="185892ca331a191bb2a72275ea9daa4ee27779a5397d4784c0dcc2449fab6efe" Jan 05 22:06:37 crc kubenswrapper[5034]: I0105 22:06:37.846616 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" path="/var/lib/kubelet/pods/49447a07-ff32-41ec-b7cc-e21cd5d4a002/volumes" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.134937 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-59f99c4667-mtskn" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.892300 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4hzvh"] Jan 05 22:06:43 crc kubenswrapper[5034]: E0105 22:06:43.892758 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerName="extract-utilities" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.892790 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerName="extract-utilities" Jan 05 22:06:43 crc kubenswrapper[5034]: E0105 22:06:43.892812 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerName="extract-content" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.892828 5034 
state_mem.go:107] "Deleted CPUSet assignment" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerName="extract-content" Jan 05 22:06:43 crc kubenswrapper[5034]: E0105 22:06:43.892855 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerName="registry-server" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.892869 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerName="registry-server" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.893104 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="49447a07-ff32-41ec-b7cc-e21cd5d4a002" containerName="registry-server" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.896466 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt"] Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.897568 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.898452 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.900266 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.900277 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.901123 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hb6tp" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.901459 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 05 22:06:43 crc kubenswrapper[5034]: I0105 22:06:43.921446 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt"] Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.031236 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k7tnh"] Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.032750 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.037406 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.037777 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.038102 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.038505 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6dwmg" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.051247 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-nn49h"] Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.052497 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.055164 5034 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.074742 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-nn49h"] Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.092209 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c30e9a6-f8a5-471b-a98a-6488b00be9b3-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-5cxjt\" (UID: \"6c30e9a6-f8a5-471b-a98a-6488b00be9b3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.092496 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-frr-sockets\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.092575 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrkc\" (UniqueName: \"kubernetes.io/projected/6c30e9a6-f8a5-471b-a98a-6488b00be9b3-kube-api-access-qrrkc\") pod \"frr-k8s-webhook-server-7784b6fcf-5cxjt\" (UID: \"6c30e9a6-f8a5-471b-a98a-6488b00be9b3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.092670 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88af2613-0081-477e-983f-1d8a7a35f282-frr-startup\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.092749 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hng2\" (UniqueName: \"kubernetes.io/projected/88af2613-0081-477e-983f-1d8a7a35f282-kube-api-access-6hng2\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.092835 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-reloader\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.092924 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88af2613-0081-477e-983f-1d8a7a35f282-metrics-certs\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.092996 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-frr-conf\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 
22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.093098 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-metrics\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.193993 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-frr-sockets\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194357 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrkc\" (UniqueName: \"kubernetes.io/projected/6c30e9a6-f8a5-471b-a98a-6488b00be9b3-kube-api-access-qrrkc\") pod \"frr-k8s-webhook-server-7784b6fcf-5cxjt\" (UID: \"6c30e9a6-f8a5-471b-a98a-6488b00be9b3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194394 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-memberlist\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194439 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88af2613-0081-477e-983f-1d8a7a35f282-frr-startup\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194465 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metallb-excludel2\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194494 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-cert\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194517 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hng2\" (UniqueName: \"kubernetes.io/projected/88af2613-0081-477e-983f-1d8a7a35f282-kube-api-access-6hng2\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194540 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-metrics-certs\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194564 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8g77\" (UniqueName: \"kubernetes.io/projected/7560ef5c-bc0a-42e7-9a1a-e610555272ad-kube-api-access-c8g77\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194626 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-reloader\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194851 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88af2613-0081-477e-983f-1d8a7a35f282-metrics-certs\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.194914 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-frr-conf\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.195033 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-metrics\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.195140 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metrics-certs\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.195184 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c30e9a6-f8a5-471b-a98a-6488b00be9b3-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-5cxjt\" (UID: \"6c30e9a6-f8a5-471b-a98a-6488b00be9b3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.195223 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhnv\" (UniqueName: \"kubernetes.io/projected/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-kube-api-access-kzhnv\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.195298 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-reloader\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.195364 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-frr-conf\") pod \"frr-k8s-4hzvh\" (UID: 
\"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.195526 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-metrics\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.195960 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88af2613-0081-477e-983f-1d8a7a35f282-frr-sockets\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.197622 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88af2613-0081-477e-983f-1d8a7a35f282-frr-startup\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.201899 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88af2613-0081-477e-983f-1d8a7a35f282-metrics-certs\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.215754 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c30e9a6-f8a5-471b-a98a-6488b00be9b3-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-5cxjt\" (UID: \"6c30e9a6-f8a5-471b-a98a-6488b00be9b3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.219833 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hng2\" (UniqueName: \"kubernetes.io/projected/88af2613-0081-477e-983f-1d8a7a35f282-kube-api-access-6hng2\") pod \"frr-k8s-4hzvh\" (UID: \"88af2613-0081-477e-983f-1d8a7a35f282\") " pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.220594 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrkc\" (UniqueName: \"kubernetes.io/projected/6c30e9a6-f8a5-471b-a98a-6488b00be9b3-kube-api-access-qrrkc\") pod \"frr-k8s-webhook-server-7784b6fcf-5cxjt\" (UID: \"6c30e9a6-f8a5-471b-a98a-6488b00be9b3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.270724 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.285422 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4hzvh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.295840 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metallb-excludel2\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.296415 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metallb-excludel2\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.296494 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-cert\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.296519 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-metrics-certs\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.296989 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8g77\" (UniqueName: \"kubernetes.io/projected/7560ef5c-bc0a-42e7-9a1a-e610555272ad-kube-api-access-c8g77\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.297096 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metrics-certs\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.297134 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzhnv\" (UniqueName: \"kubernetes.io/projected/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-kube-api-access-kzhnv\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.297168 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-memberlist\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: E0105 22:06:44.297270 5034 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 05 22:06:44 crc kubenswrapper[5034]: E0105 22:06:44.297310 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-memberlist podName:7560ef5c-bc0a-42e7-9a1a-e610555272ad nodeName:}" failed. 
No retries permitted until 2026-01-05 22:06:44.797296236 +0000 UTC m=+897.169295675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-memberlist") pod "speaker-k7tnh" (UID: "7560ef5c-bc0a-42e7-9a1a-e610555272ad") : secret "metallb-memberlist" not found Jan 05 22:06:44 crc kubenswrapper[5034]: E0105 22:06:44.297353 5034 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 05 22:06:44 crc kubenswrapper[5034]: E0105 22:06:44.297376 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metrics-certs podName:7560ef5c-bc0a-42e7-9a1a-e610555272ad nodeName:}" failed. No retries permitted until 2026-01-05 22:06:44.797368018 +0000 UTC m=+897.169367457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metrics-certs") pod "speaker-k7tnh" (UID: "7560ef5c-bc0a-42e7-9a1a-e610555272ad") : secret "speaker-certs-secret" not found Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.300018 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-cert\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.301670 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-metrics-certs\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.314022 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8g77\" (UniqueName: \"kubernetes.io/projected/7560ef5c-bc0a-42e7-9a1a-e610555272ad-kube-api-access-c8g77\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.322747 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzhnv\" (UniqueName: \"kubernetes.io/projected/dbf8030a-b9cc-402c-90ae-51ee7b7e0883-kube-api-access-kzhnv\") pod \"controller-5bddd4b946-nn49h\" (UID: \"dbf8030a-b9cc-402c-90ae-51ee7b7e0883\") " pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.366844 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.501105 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt"] Jan 05 22:06:44 crc kubenswrapper[5034]: W0105 22:06:44.526312 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c30e9a6_f8a5_471b_a98a_6488b00be9b3.slice/crio-55e2221c81e28b90c42f9d9bdca7c93b0a251d30c1270c6a71e2a76f366bd25e WatchSource:0}: Error finding container 55e2221c81e28b90c42f9d9bdca7c93b0a251d30c1270c6a71e2a76f366bd25e: Status 404 returned error can't find the container with id 55e2221c81e28b90c42f9d9bdca7c93b0a251d30c1270c6a71e2a76f366bd25e Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.576742 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-nn49h"] Jan 05 22:06:44 crc kubenswrapper[5034]: W0105 22:06:44.580274 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf8030a_b9cc_402c_90ae_51ee7b7e0883.slice/crio-eb3dbb7431323bc2fad26104b04820afc394ddcc5cb6d8f3cc58f1aa79180211 WatchSource:0}: Error finding container eb3dbb7431323bc2fad26104b04820afc394ddcc5cb6d8f3cc58f1aa79180211: Status 404 returned error can't find the container with id eb3dbb7431323bc2fad26104b04820afc394ddcc5cb6d8f3cc58f1aa79180211 Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.805711 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metrics-certs\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.807131 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-memberlist\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.812282 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-memberlist\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.812646 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7560ef5c-bc0a-42e7-9a1a-e610555272ad-metrics-certs\") pod \"speaker-k7tnh\" (UID: \"7560ef5c-bc0a-42e7-9a1a-e610555272ad\") " pod="metallb-system/speaker-k7tnh" Jan 05 22:06:44 crc kubenswrapper[5034]: I0105 22:06:44.947716 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k7tnh" Jan 05 22:06:45 crc kubenswrapper[5034]: I0105 22:06:45.024479 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k7tnh" event={"ID":"7560ef5c-bc0a-42e7-9a1a-e610555272ad","Type":"ContainerStarted","Data":"a49797097454043f838040c5152b4ef7ce1141939958684e5267172111cf4c4c"} Jan 05 22:06:45 crc kubenswrapper[5034]: I0105 22:06:45.026028 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerStarted","Data":"0e3424805e172f58acdc0fe3ce65db611a607f80de4788356ec58cd695f74da1"} Jan 05 22:06:45 crc kubenswrapper[5034]: I0105 22:06:45.027323 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" event={"ID":"6c30e9a6-f8a5-471b-a98a-6488b00be9b3","Type":"ContainerStarted","Data":"55e2221c81e28b90c42f9d9bdca7c93b0a251d30c1270c6a71e2a76f366bd25e"} Jan 05 22:06:45 crc kubenswrapper[5034]: I0105 22:06:45.029238 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-nn49h" event={"ID":"dbf8030a-b9cc-402c-90ae-51ee7b7e0883","Type":"ContainerStarted","Data":"fc8b9f516aeffc15fc3b7985700389c0c4047d97bb26170141f9f07dc8452e09"} Jan 05 22:06:45 crc kubenswrapper[5034]: I0105 22:06:45.029260 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-nn49h" event={"ID":"dbf8030a-b9cc-402c-90ae-51ee7b7e0883","Type":"ContainerStarted","Data":"5b1213d6f5e798eb6b73f39bb6c7bb78c6a0d0da6280313da0a078c006143259"} Jan 05 22:06:45 crc kubenswrapper[5034]: I0105 22:06:45.029272 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-nn49h" event={"ID":"dbf8030a-b9cc-402c-90ae-51ee7b7e0883","Type":"ContainerStarted","Data":"eb3dbb7431323bc2fad26104b04820afc394ddcc5cb6d8f3cc58f1aa79180211"} Jan 05 22:06:45 crc kubenswrapper[5034]: I0105 22:06:45.029399 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-nn49h" Jan 05 22:06:45 crc kubenswrapper[5034]: I0105 22:06:45.051608 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-nn49h" podStartSLOduration=1.051587949 podStartE2EDuration="1.051587949s" podCreationTimestamp="2026-01-05 22:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:06:45.047984766 +0000 UTC m=+897.419984215" watchObservedRunningTime="2026-01-05 22:06:45.051587949 +0000 UTC m=+897.423587408" Jan 05 22:06:46 crc kubenswrapper[5034]: I0105 22:06:46.044383 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k7tnh" event={"ID":"7560ef5c-bc0a-42e7-9a1a-e610555272ad","Type":"ContainerStarted","Data":"7d219f25ef00f12d690a6437948b74cdf5772f03121a1871853c8b806e130c0c"} Jan 05 22:06:46 crc kubenswrapper[5034]: I0105 22:06:46.044723 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k7tnh" Jan 05 22:06:46 crc kubenswrapper[5034]: I0105 22:06:46.044736 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k7tnh" event={"ID":"7560ef5c-bc0a-42e7-9a1a-e610555272ad","Type":"ContainerStarted","Data":"30844bf0f17409062372836e5b452ccde5978c7987b67c916e326f5ff79f111a"} Jan 05 22:06:46 crc kubenswrapper[5034]: I0105 22:06:46.065580 5034 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k7tnh" podStartSLOduration=2.065558207 podStartE2EDuration="2.065558207s" podCreationTimestamp="2026-01-05 22:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:06:46.061415159 +0000 UTC m=+898.433414598" watchObservedRunningTime="2026-01-05 22:06:46.065558207 +0000 UTC m=+898.437557646"
Jan 05 22:06:50 crc kubenswrapper[5034]: I0105 22:06:50.469215 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:06:50 crc kubenswrapper[5034]: I0105 22:06:50.469922 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:06:53 crc kubenswrapper[5034]: I0105 22:06:53.096242 5034 generic.go:334] "Generic (PLEG): container finished" podID="88af2613-0081-477e-983f-1d8a7a35f282" containerID="ec1a535fea788b3fa8a8ff6176d9db3f316c8a44272890c174bf8a89dd36782a" exitCode=0
Jan 05 22:06:53 crc kubenswrapper[5034]: I0105 22:06:53.096307 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerDied","Data":"ec1a535fea788b3fa8a8ff6176d9db3f316c8a44272890c174bf8a89dd36782a"}
Jan 05 22:06:53 crc kubenswrapper[5034]: I0105 22:06:53.097812 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" event={"ID":"6c30e9a6-f8a5-471b-a98a-6488b00be9b3","Type":"ContainerStarted","Data":"29995531f3644185df01e23f737ae9e0ee8278c2eddc23f35c5970136daec4aa"}
Jan 05 22:06:53 crc kubenswrapper[5034]: I0105 22:06:53.098132 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt"
Jan 05 22:06:53 crc kubenswrapper[5034]: I0105 22:06:53.142638 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt" podStartSLOduration=2.346722321 podStartE2EDuration="10.142619938s" podCreationTimestamp="2026-01-05 22:06:43 +0000 UTC" firstStartedPulling="2026-01-05 22:06:44.531364024 +0000 UTC m=+896.903363453" lastFinishedPulling="2026-01-05 22:06:52.327261631 +0000 UTC m=+904.699261070" observedRunningTime="2026-01-05 22:06:53.139206581 +0000 UTC m=+905.511206020" watchObservedRunningTime="2026-01-05 22:06:53.142619938 +0000 UTC m=+905.514619377"
Jan 05 22:06:54 crc kubenswrapper[5034]: I0105 22:06:54.112053 5034 generic.go:334] "Generic (PLEG): container finished" podID="88af2613-0081-477e-983f-1d8a7a35f282" containerID="58f3b3045307b948bd97bcb7559a0aab6c5693e2ff41f796a434fb643c55667e" exitCode=0
Jan 05 22:06:54 crc kubenswrapper[5034]: I0105 22:06:54.112262 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerDied","Data":"58f3b3045307b948bd97bcb7559a0aab6c5693e2ff41f796a434fb643c55667e"}
Jan 05 22:06:54 crc kubenswrapper[5034]: I0105 22:06:54.370323 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-nn49h"
Jan 05 22:06:55 crc kubenswrapper[5034]: I0105 22:06:55.118407 5034 generic.go:334] "Generic (PLEG): container finished" podID="88af2613-0081-477e-983f-1d8a7a35f282" containerID="d7934f53ffad2f11654510ebb2ed05801a454ce42e5778284dc0221ffcec2354" exitCode=0
Jan 05 22:06:55 crc kubenswrapper[5034]: I0105 22:06:55.118453 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerDied","Data":"d7934f53ffad2f11654510ebb2ed05801a454ce42e5778284dc0221ffcec2354"}
Jan 05 22:06:56 crc kubenswrapper[5034]: I0105 22:06:56.128645 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerStarted","Data":"12f38939c5cd808700b8fdc033e5b61da11dbb622570bb41f684416f80f7366c"}
Jan 05 22:06:56 crc kubenswrapper[5034]: I0105 22:06:56.128940 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerStarted","Data":"ed33553b7ed04a0481763036912a4ee206bfda9f938585465cb6c9e089636b37"}
Jan 05 22:06:56 crc kubenswrapper[5034]: I0105 22:06:56.128959 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4hzvh"
Jan 05 22:06:56 crc kubenswrapper[5034]: I0105 22:06:56.128972 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerStarted","Data":"8af1fa5a27787cc610ed5c671ba32f4315fa1cc4ed09e5851821b6629ebf6811"}
Jan 05 22:06:56 crc kubenswrapper[5034]: I0105 22:06:56.128985 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerStarted","Data":"3f2773b47f15a958f02ec9a686778a9877aa5dd9531bbd945d785fee79f5fbde"}
Jan 05 22:06:56 crc kubenswrapper[5034]: I0105 22:06:56.128996 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerStarted","Data":"6e7dbfe8d1025d33cfc387e34a2e6d7f68b30bcc69da827bf6678afee88143af"}
Jan 05 22:06:56 crc kubenswrapper[5034]: I0105 22:06:56.129008 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4hzvh" event={"ID":"88af2613-0081-477e-983f-1d8a7a35f282","Type":"ContainerStarted","Data":"771ff69c37e173b695c7dd6bd11705f1e4dc762df927a0873a11b12a3022a5b5"}
Jan 05 22:06:56 crc kubenswrapper[5034]: I0105 22:06:56.151520 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4hzvh" podStartSLOduration=5.29209768 podStartE2EDuration="13.151503062s" podCreationTimestamp="2026-01-05 22:06:43 +0000 UTC" firstStartedPulling="2026-01-05 22:06:44.46786754 +0000 UTC m=+896.839866989" lastFinishedPulling="2026-01-05 22:06:52.327272932 +0000 UTC m=+904.699272371" observedRunningTime="2026-01-05 22:06:56.14720454 +0000 UTC m=+908.519203979" watchObservedRunningTime="2026-01-05 22:06:56.151503062 +0000 UTC m=+908.523502501"
Jan 05 22:06:59 crc kubenswrapper[5034]: I0105 22:06:59.286335 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4hzvh"
Jan 05 22:06:59 crc kubenswrapper[5034]: I0105 22:06:59.324426 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4hzvh"
Jan 05 22:07:04 crc kubenswrapper[5034]: I0105 22:07:04.276584 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-5cxjt"
Jan 05 22:07:04 crc kubenswrapper[5034]: I0105 22:07:04.951760 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k7tnh"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.371375 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"]
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.373125 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.375845 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.383938 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"]
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.525398 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7478m\" (UniqueName: \"kubernetes.io/projected/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-kube-api-access-7478m\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.525460 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.525493 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.626652 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7478m\" (UniqueName: \"kubernetes.io/projected/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-kube-api-access-7478m\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.626735 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.626784 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.627426 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.627498 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.657919 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7478m\" (UniqueName: \"kubernetes.io/projected/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-kube-api-access-7478m\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:06 crc kubenswrapper[5034]: I0105 22:07:06.705240 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:07 crc kubenswrapper[5034]: I0105 22:07:07.168107 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"]
Jan 05 22:07:07 crc kubenswrapper[5034]: I0105 22:07:07.210166 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq" event={"ID":"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22","Type":"ContainerStarted","Data":"1df301496a7211b84afdc07842fd83bab20248723cdbc9da8bfac90fab582bca"}
Jan 05 22:07:08 crc kubenswrapper[5034]: I0105 22:07:08.216574 5034 generic.go:334] "Generic (PLEG): container finished" podID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerID="dce1463b79bf90a781a652cbe252aa57043a072eaa1e8ed44d22a505a3baf79d" exitCode=0
Jan 05 22:07:08 crc kubenswrapper[5034]: I0105 22:07:08.216668 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq" event={"ID":"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22","Type":"ContainerDied","Data":"dce1463b79bf90a781a652cbe252aa57043a072eaa1e8ed44d22a505a3baf79d"}
Jan 05 22:07:13 crc kubenswrapper[5034]: I0105 22:07:13.246705 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq" event={"ID":"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22","Type":"ContainerStarted","Data":"a7568391cf1bfa8ed6cd3269e1b9314d44001b11e5081a77947cae322026bd48"}
Jan 05 22:07:14 crc kubenswrapper[5034]: I0105 22:07:14.254412 5034 generic.go:334] "Generic (PLEG): container finished" podID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerID="a7568391cf1bfa8ed6cd3269e1b9314d44001b11e5081a77947cae322026bd48" exitCode=0
Jan 05 22:07:14 crc kubenswrapper[5034]: I0105 22:07:14.254456 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq" event={"ID":"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22","Type":"ContainerDied","Data":"a7568391cf1bfa8ed6cd3269e1b9314d44001b11e5081a77947cae322026bd48"}
Jan 05 22:07:14 crc kubenswrapper[5034]: I0105 22:07:14.295432 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4hzvh"
Jan 05 22:07:15 crc kubenswrapper[5034]: I0105 22:07:15.261398 5034 generic.go:334] "Generic (PLEG): container finished" podID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerID="d02c1538ceccc4e6108fa94305bf7369d88f2587080c04311984bc1bf2d9d8b5" exitCode=0
Jan 05 22:07:15 crc kubenswrapper[5034]: I0105 22:07:15.261457 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq" event={"ID":"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22","Type":"ContainerDied","Data":"d02c1538ceccc4e6108fa94305bf7369d88f2587080c04311984bc1bf2d9d8b5"}
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.567457 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.659064 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-util\") pod \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") "
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.659187 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7478m\" (UniqueName: \"kubernetes.io/projected/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-kube-api-access-7478m\") pod \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") "
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.659215 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-bundle\") pod \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\" (UID: \"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22\") "
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.660343 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-bundle" (OuterVolumeSpecName: "bundle") pod "6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" (UID: "6178a12f-8e0f-4038-9bd2-e8a21d4dcd22"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.663911 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-kube-api-access-7478m" (OuterVolumeSpecName: "kube-api-access-7478m") pod "6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" (UID: "6178a12f-8e0f-4038-9bd2-e8a21d4dcd22"). InnerVolumeSpecName "kube-api-access-7478m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.670506 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-util" (OuterVolumeSpecName: "util") pod "6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" (UID: "6178a12f-8e0f-4038-9bd2-e8a21d4dcd22"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.760905 5034 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-util\") on node \"crc\" DevicePath \"\""
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.760957 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7478m\" (UniqueName: \"kubernetes.io/projected/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-kube-api-access-7478m\") on node \"crc\" DevicePath \"\""
Jan 05 22:07:16 crc kubenswrapper[5034]: I0105 22:07:16.760967 5034 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6178a12f-8e0f-4038-9bd2-e8a21d4dcd22-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 22:07:17 crc kubenswrapper[5034]: I0105 22:07:17.287174 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq" event={"ID":"6178a12f-8e0f-4038-9bd2-e8a21d4dcd22","Type":"ContainerDied","Data":"1df301496a7211b84afdc07842fd83bab20248723cdbc9da8bfac90fab582bca"}
Jan 05 22:07:17 crc kubenswrapper[5034]: I0105 22:07:17.287236 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df301496a7211b84afdc07842fd83bab20248723cdbc9da8bfac90fab582bca"
Jan 05 22:07:17 crc kubenswrapper[5034]: I0105 22:07:17.287284 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq"
Jan 05 22:07:20 crc kubenswrapper[5034]: I0105 22:07:20.469242 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:07:20 crc kubenswrapper[5034]: I0105 22:07:20.469536 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.906396 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"]
Jan 05 22:07:23 crc kubenswrapper[5034]: E0105 22:07:23.907796 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerName="pull"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.907884 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerName="pull"
Jan 05 22:07:23 crc kubenswrapper[5034]: E0105 22:07:23.907961 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerName="util"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.908031 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerName="util"
Jan 05 22:07:23 crc kubenswrapper[5034]: E0105 22:07:23.908173 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerName="extract"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.908249 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerName="extract"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.908629 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6178a12f-8e0f-4038-9bd2-e8a21d4dcd22" containerName="extract"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.909212 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.912814 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.913030 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.913755 5034 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-m5v8s"
Jan 05 22:07:23 crc kubenswrapper[5034]: I0105 22:07:23.924092 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"]
Jan 05 22:07:24 crc kubenswrapper[5034]: I0105 22:07:24.079180 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/211c6363-8f6b-42d7-b9e5-9db752f5ecb1-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mzhg4\" (UID: \"211c6363-8f6b-42d7-b9e5-9db752f5ecb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"
Jan 05 22:07:24 crc kubenswrapper[5034]: I0105 22:07:24.079398 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qfv\" (UniqueName: \"kubernetes.io/projected/211c6363-8f6b-42d7-b9e5-9db752f5ecb1-kube-api-access-22qfv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mzhg4\" (UID: \"211c6363-8f6b-42d7-b9e5-9db752f5ecb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"
Jan 05 22:07:24 crc kubenswrapper[5034]: I0105 22:07:24.180234 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/211c6363-8f6b-42d7-b9e5-9db752f5ecb1-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mzhg4\" (UID: \"211c6363-8f6b-42d7-b9e5-9db752f5ecb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"
Jan 05 22:07:24 crc kubenswrapper[5034]: I0105 22:07:24.180575 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22qfv\" (UniqueName: \"kubernetes.io/projected/211c6363-8f6b-42d7-b9e5-9db752f5ecb1-kube-api-access-22qfv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mzhg4\" (UID: \"211c6363-8f6b-42d7-b9e5-9db752f5ecb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"
Jan 05 22:07:24 crc kubenswrapper[5034]: I0105 22:07:24.180799 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/211c6363-8f6b-42d7-b9e5-9db752f5ecb1-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mzhg4\" (UID: \"211c6363-8f6b-42d7-b9e5-9db752f5ecb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"
Jan 05 22:07:24 crc kubenswrapper[5034]: I0105 22:07:24.199296 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22qfv\" (UniqueName: \"kubernetes.io/projected/211c6363-8f6b-42d7-b9e5-9db752f5ecb1-kube-api-access-22qfv\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mzhg4\" (UID: \"211c6363-8f6b-42d7-b9e5-9db752f5ecb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"
Jan 05 22:07:24 crc kubenswrapper[5034]: I0105 22:07:24.228194 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"
Jan 05 22:07:24 crc kubenswrapper[5034]: I0105 22:07:24.683735 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4"]
Jan 05 22:07:24 crc kubenswrapper[5034]: W0105 22:07:24.690200 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod211c6363_8f6b_42d7_b9e5_9db752f5ecb1.slice/crio-e885a8825d3579c8b1870ba68dbba7e333ffb18b56f9e1fc11c68f325418e9d4 WatchSource:0}: Error finding container e885a8825d3579c8b1870ba68dbba7e333ffb18b56f9e1fc11c68f325418e9d4: Status 404 returned error can't find the container with id e885a8825d3579c8b1870ba68dbba7e333ffb18b56f9e1fc11c68f325418e9d4
Jan 05 22:07:25 crc kubenswrapper[5034]: I0105 22:07:25.332668 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4" event={"ID":"211c6363-8f6b-42d7-b9e5-9db752f5ecb1","Type":"ContainerStarted","Data":"e885a8825d3579c8b1870ba68dbba7e333ffb18b56f9e1fc11c68f325418e9d4"}
Jan 05 22:07:33 crc kubenswrapper[5034]: I0105 22:07:33.384343 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4" event={"ID":"211c6363-8f6b-42d7-b9e5-9db752f5ecb1","Type":"ContainerStarted","Data":"25454c70fcdfa9c2c6af2b241bdb0cbb9057c9002a65a4fb3249d6dbcd749cf1"}
Jan 05 22:07:33 crc kubenswrapper[5034]: I0105 22:07:33.407978 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mzhg4" podStartSLOduration=2.088762757 podStartE2EDuration="10.407961766s" podCreationTimestamp="2026-01-05 22:07:23 +0000 UTC" firstStartedPulling="2026-01-05 22:07:24.691236677 +0000 UTC m=+937.063236126" lastFinishedPulling="2026-01-05 22:07:33.010435696 +0000 UTC m=+945.382435135" observedRunningTime="2026-01-05 22:07:33.402091589 +0000 UTC m=+945.774091018" watchObservedRunningTime="2026-01-05 22:07:33.407961766 +0000 UTC m=+945.779961205"
Jan 05 22:07:35 crc kubenswrapper[5034]: I0105 22:07:35.927814 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"]
Jan 05 22:07:35 crc kubenswrapper[5034]: I0105 22:07:35.928946 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:35 crc kubenswrapper[5034]: I0105 22:07:35.930232 5034 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ssw84"
Jan 05 22:07:35 crc kubenswrapper[5034]: I0105 22:07:35.931020 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 05 22:07:35 crc kubenswrapper[5034]: I0105 22:07:35.934479 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 05 22:07:35 crc kubenswrapper[5034]: I0105 22:07:35.943870 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"]
Jan 05 22:07:36 crc kubenswrapper[5034]: I0105 22:07:36.047592 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wntz\" (UniqueName: \"kubernetes.io/projected/d51296ce-60f9-44a1-8fce-7915e000ee74-kube-api-access-7wntz\") pod \"cert-manager-webhook-f4fb5df64-nf4xl\" (UID: \"d51296ce-60f9-44a1-8fce-7915e000ee74\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:36 crc kubenswrapper[5034]: I0105 22:07:36.047665 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d51296ce-60f9-44a1-8fce-7915e000ee74-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nf4xl\" (UID: \"d51296ce-60f9-44a1-8fce-7915e000ee74\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:36 crc kubenswrapper[5034]: I0105 22:07:36.149345 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wntz\" (UniqueName: \"kubernetes.io/projected/d51296ce-60f9-44a1-8fce-7915e000ee74-kube-api-access-7wntz\") pod \"cert-manager-webhook-f4fb5df64-nf4xl\" (UID: \"d51296ce-60f9-44a1-8fce-7915e000ee74\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:36 crc kubenswrapper[5034]: I0105 22:07:36.149431 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d51296ce-60f9-44a1-8fce-7915e000ee74-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nf4xl\" (UID: \"d51296ce-60f9-44a1-8fce-7915e000ee74\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:36 crc kubenswrapper[5034]: I0105 22:07:36.167244 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wntz\" (UniqueName: \"kubernetes.io/projected/d51296ce-60f9-44a1-8fce-7915e000ee74-kube-api-access-7wntz\") pod \"cert-manager-webhook-f4fb5df64-nf4xl\" (UID: \"d51296ce-60f9-44a1-8fce-7915e000ee74\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:36 crc kubenswrapper[5034]: I0105 22:07:36.173902 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d51296ce-60f9-44a1-8fce-7915e000ee74-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nf4xl\" (UID: \"d51296ce-60f9-44a1-8fce-7915e000ee74\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:36 crc kubenswrapper[5034]: I0105 22:07:36.261507 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:36 crc kubenswrapper[5034]: I0105 22:07:36.654046 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"]
Jan 05 22:07:36 crc kubenswrapper[5034]: W0105 22:07:36.658609 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd51296ce_60f9_44a1_8fce_7915e000ee74.slice/crio-20375bc6c05ccea78b5ef6bcd61999458c5e6bcfa60af6720fcf52a1ae496929 WatchSource:0}: Error finding container 20375bc6c05ccea78b5ef6bcd61999458c5e6bcfa60af6720fcf52a1ae496929: Status 404 returned error can't find the container with id 20375bc6c05ccea78b5ef6bcd61999458c5e6bcfa60af6720fcf52a1ae496929
Jan 05 22:07:37 crc kubenswrapper[5034]: I0105 22:07:37.410002 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl" event={"ID":"d51296ce-60f9-44a1-8fce-7915e000ee74","Type":"ContainerStarted","Data":"20375bc6c05ccea78b5ef6bcd61999458c5e6bcfa60af6720fcf52a1ae496929"}
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.085288 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"]
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.086253 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.088721 5034 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xznv2"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.093210 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"]
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.177696 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e28a75e2-b589-439a-ad3e-95fbe1db1d9c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-twtrf\" (UID: \"e28a75e2-b589-439a-ad3e-95fbe1db1d9c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.177779 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcpl\" (UniqueName: \"kubernetes.io/projected/e28a75e2-b589-439a-ad3e-95fbe1db1d9c-kube-api-access-lzcpl\") pod \"cert-manager-cainjector-855d9ccff4-twtrf\" (UID: \"e28a75e2-b589-439a-ad3e-95fbe1db1d9c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.286595 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzcpl\" (UniqueName: \"kubernetes.io/projected/e28a75e2-b589-439a-ad3e-95fbe1db1d9c-kube-api-access-lzcpl\") pod \"cert-manager-cainjector-855d9ccff4-twtrf\" (UID: \"e28a75e2-b589-439a-ad3e-95fbe1db1d9c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.287307 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e28a75e2-b589-439a-ad3e-95fbe1db1d9c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-twtrf\" (UID: \"e28a75e2-b589-439a-ad3e-95fbe1db1d9c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.307678 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e28a75e2-b589-439a-ad3e-95fbe1db1d9c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-twtrf\" (UID: \"e28a75e2-b589-439a-ad3e-95fbe1db1d9c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.311529 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzcpl\" (UniqueName: \"kubernetes.io/projected/e28a75e2-b589-439a-ad3e-95fbe1db1d9c-kube-api-access-lzcpl\") pod \"cert-manager-cainjector-855d9ccff4-twtrf\" (UID: \"e28a75e2-b589-439a-ad3e-95fbe1db1d9c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.404478 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"
Jan 05 22:07:38 crc kubenswrapper[5034]: I0105 22:07:38.654662 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-twtrf"]
Jan 05 22:07:39 crc kubenswrapper[5034]: I0105 22:07:39.425980 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf" event={"ID":"e28a75e2-b589-439a-ad3e-95fbe1db1d9c","Type":"ContainerStarted","Data":"9910140140ddb4c88260edfca8a94de778ef0834b2f7c5ae3c0e71e42a06c31b"}
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.469269 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.469875 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.469929 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc"
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.470609 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2ae88310c27c8bb417de34e2de1e513ef4f2cf46c667d74e4ed38e85d67a96f"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.470681 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://c2ae88310c27c8bb417de34e2de1e513ef4f2cf46c667d74e4ed38e85d67a96f" gracePeriod=600
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.521980 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl" event={"ID":"d51296ce-60f9-44a1-8fce-7915e000ee74","Type":"ContainerStarted","Data":"91a207dee9d15a55a6a49f04eef07ab73a5c093850cc3620d9f16109fabe8ef3"}
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.522120 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.523312 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf" event={"ID":"e28a75e2-b589-439a-ad3e-95fbe1db1d9c","Type":"ContainerStarted","Data":"8f23ae2c3cb0fc796778fd3be08571b3fbc4ef0a001774096e092a6ff7f82b47"}
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.534884 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl" podStartSLOduration=2.681333992 podStartE2EDuration="15.534873478s" podCreationTimestamp="2026-01-05 22:07:35 +0000 UTC" firstStartedPulling="2026-01-05 22:07:36.660400297 +0000 UTC m=+949.032399736" lastFinishedPulling="2026-01-05 22:07:49.513939783 +0000 UTC m=+961.885939222" observedRunningTime="2026-01-05 22:07:50.5342334 +0000 UTC m=+962.906232859" watchObservedRunningTime="2026-01-05 22:07:50.534873478 +0000 UTC m=+962.906872917"
Jan 05 22:07:50 crc kubenswrapper[5034]: I0105 22:07:50.553037 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-twtrf" podStartSLOduration=1.724648793 podStartE2EDuration="12.553020524s" podCreationTimestamp="2026-01-05 22:07:38 +0000 UTC" firstStartedPulling="2026-01-05 22:07:38.685608333 +0000 UTC m=+951.057607772" lastFinishedPulling="2026-01-05 22:07:49.513980064 +0000 UTC m=+961.885979503" observedRunningTime="2026-01-05 22:07:50.549473783 +0000 UTC m=+962.921473222" watchObservedRunningTime="2026-01-05 22:07:50.553020524 +0000 UTC m=+962.925019963"
Jan 05 22:07:51 crc kubenswrapper[5034]: I0105 22:07:51.537464 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="c2ae88310c27c8bb417de34e2de1e513ef4f2cf46c667d74e4ed38e85d67a96f" exitCode=0
Jan 05 22:07:51 crc kubenswrapper[5034]: I0105 22:07:51.537538 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"c2ae88310c27c8bb417de34e2de1e513ef4f2cf46c667d74e4ed38e85d67a96f"}
Jan 05 22:07:51 crc kubenswrapper[5034]: I0105 22:07:51.538149 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"7aa61e9f5aaa409d4332d17291c1246e891073205f554c85d6e919f6906d1cd4"}
Jan 05 22:07:51 crc kubenswrapper[5034]: I0105 22:07:51.538178 5034 scope.go:117] "RemoveContainer" containerID="88445724b2a970c08e5f2c6402ea7e57704a8e3d0fb29457d3eb885ad064167b"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.140662 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dxxgk"]
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.142129 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-dxxgk"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.143883 5034 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-kmt4w"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.158341 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dxxgk"]
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.257955 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd4gv\" (UniqueName: \"kubernetes.io/projected/4f0c5ff9-b60a-43a3-bfbc-4790b7622531-kube-api-access-dd4gv\") pod \"cert-manager-86cb77c54b-dxxgk\" (UID: \"4f0c5ff9-b60a-43a3-bfbc-4790b7622531\") " pod="cert-manager/cert-manager-86cb77c54b-dxxgk"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.258010 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f0c5ff9-b60a-43a3-bfbc-4790b7622531-bound-sa-token\") pod \"cert-manager-86cb77c54b-dxxgk\" (UID: \"4f0c5ff9-b60a-43a3-bfbc-4790b7622531\") " pod="cert-manager/cert-manager-86cb77c54b-dxxgk"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.359012 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd4gv\" (UniqueName: \"kubernetes.io/projected/4f0c5ff9-b60a-43a3-bfbc-4790b7622531-kube-api-access-dd4gv\") pod \"cert-manager-86cb77c54b-dxxgk\" (UID: \"4f0c5ff9-b60a-43a3-bfbc-4790b7622531\") " pod="cert-manager/cert-manager-86cb77c54b-dxxgk"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.359109 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f0c5ff9-b60a-43a3-bfbc-4790b7622531-bound-sa-token\") pod \"cert-manager-86cb77c54b-dxxgk\" (UID: \"4f0c5ff9-b60a-43a3-bfbc-4790b7622531\") " pod="cert-manager/cert-manager-86cb77c54b-dxxgk"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.377196 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f0c5ff9-b60a-43a3-bfbc-4790b7622531-bound-sa-token\") pod \"cert-manager-86cb77c54b-dxxgk\" (UID: \"4f0c5ff9-b60a-43a3-bfbc-4790b7622531\") " pod="cert-manager/cert-manager-86cb77c54b-dxxgk"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.377392 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd4gv\" (UniqueName: \"kubernetes.io/projected/4f0c5ff9-b60a-43a3-bfbc-4790b7622531-kube-api-access-dd4gv\") pod \"cert-manager-86cb77c54b-dxxgk\" (UID: \"4f0c5ff9-b60a-43a3-bfbc-4790b7622531\") " pod="cert-manager/cert-manager-86cb77c54b-dxxgk"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.466927 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-dxxgk"
Jan 05 22:07:55 crc kubenswrapper[5034]: I0105 22:07:55.895287 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dxxgk"]
Jan 05 22:07:56 crc kubenswrapper[5034]: I0105 22:07:56.265893 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-nf4xl"
Jan 05 22:07:56 crc kubenswrapper[5034]: I0105 22:07:56.570991 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-dxxgk" event={"ID":"4f0c5ff9-b60a-43a3-bfbc-4790b7622531","Type":"ContainerStarted","Data":"ce7743be4e49aed37c15f1a1a34a635bd0168af8650f5ad0433791eebe57eeed"}
Jan 05 22:07:56 crc kubenswrapper[5034]: I0105 22:07:56.571045 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-dxxgk" event={"ID":"4f0c5ff9-b60a-43a3-bfbc-4790b7622531","Type":"ContainerStarted","Data":"d9c8d3709f20e94eec8b21791c715795ac026e680c6ac5ecbdd47140ffab8675"}
Jan 05 22:07:56 crc kubenswrapper[5034]: I0105 22:07:56.587738 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-dxxgk" podStartSLOduration=1.587717531 podStartE2EDuration="1.587717531s" podCreationTimestamp="2026-01-05 22:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:07:56.586412384 +0000 UTC m=+968.958411843" watchObservedRunningTime="2026-01-05 22:07:56.587717531 +0000 UTC m=+968.959716970"
Jan 05 22:07:59 crc kubenswrapper[5034]: I0105 22:07:59.981998 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-879gg"]
Jan 05 22:07:59 crc kubenswrapper[5034]: I0105 22:07:59.984368 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-879gg"
Jan 05 22:07:59 crc kubenswrapper[5034]: I0105 22:07:59.987136 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 05 22:07:59 crc kubenswrapper[5034]: I0105 22:07:59.987212 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tpsc4"
Jan 05 22:07:59 crc kubenswrapper[5034]: I0105 22:07:59.992521 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 05 22:07:59 crc kubenswrapper[5034]: I0105 22:07:59.998043 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-879gg"]
Jan 05 22:08:00 crc kubenswrapper[5034]: I0105 22:08:00.124201 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqrg8\" (UniqueName: \"kubernetes.io/projected/a9e04d84-a71d-4cd6-a4a1-c8751af295f0-kube-api-access-xqrg8\") pod \"openstack-operator-index-879gg\" (UID: \"a9e04d84-a71d-4cd6-a4a1-c8751af295f0\") " pod="openstack-operators/openstack-operator-index-879gg"
Jan 05 22:08:00 crc kubenswrapper[5034]: I0105 22:08:00.225042 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqrg8\" (UniqueName: \"kubernetes.io/projected/a9e04d84-a71d-4cd6-a4a1-c8751af295f0-kube-api-access-xqrg8\") pod \"openstack-operator-index-879gg\" (UID: \"a9e04d84-a71d-4cd6-a4a1-c8751af295f0\") " pod="openstack-operators/openstack-operator-index-879gg"
Jan 05 22:08:00 crc kubenswrapper[5034]: I0105 22:08:00.244935 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqrg8\" (UniqueName: \"kubernetes.io/projected/a9e04d84-a71d-4cd6-a4a1-c8751af295f0-kube-api-access-xqrg8\") pod \"openstack-operator-index-879gg\" (UID: \"a9e04d84-a71d-4cd6-a4a1-c8751af295f0\") " pod="openstack-operators/openstack-operator-index-879gg"
Jan 05 22:08:00 crc kubenswrapper[5034]: I0105 22:08:00.301721 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-879gg"
Jan 05 22:08:00 crc kubenswrapper[5034]: I0105 22:08:00.687297 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-879gg"]
Jan 05 22:08:00 crc kubenswrapper[5034]: W0105 22:08:00.693241 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9e04d84_a71d_4cd6_a4a1_c8751af295f0.slice/crio-b6a21b8c23fd512589d3456c5e5e9e812db53a61f515b645a2eac571a2e53159 WatchSource:0}: Error finding container b6a21b8c23fd512589d3456c5e5e9e812db53a61f515b645a2eac571a2e53159: Status 404 returned error can't find the container with id b6a21b8c23fd512589d3456c5e5e9e812db53a61f515b645a2eac571a2e53159
Jan 05 22:08:01 crc kubenswrapper[5034]: I0105 22:08:01.601677 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-879gg" event={"ID":"a9e04d84-a71d-4cd6-a4a1-c8751af295f0","Type":"ContainerStarted","Data":"20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0"}
Jan 05 22:08:01 crc kubenswrapper[5034]: I0105 22:08:01.602558 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-879gg" event={"ID":"a9e04d84-a71d-4cd6-a4a1-c8751af295f0","Type":"ContainerStarted","Data":"b6a21b8c23fd512589d3456c5e5e9e812db53a61f515b645a2eac571a2e53159"}
Jan 05 22:08:01 crc kubenswrapper[5034]: I0105 22:08:01.620111 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-879gg" podStartSLOduration=1.868616858 podStartE2EDuration="2.620069535s" podCreationTimestamp="2026-01-05 22:07:59 +0000 UTC" firstStartedPulling="2026-01-05 22:08:00.695443823 +0000 UTC m=+973.067443262" lastFinishedPulling="2026-01-05 22:08:01.4468965 +0000 UTC m=+973.818895939" observedRunningTime="2026-01-05 22:08:01.616171024 +0000 UTC m=+973.988170483" watchObservedRunningTime="2026-01-05 22:08:01.620069535 +0000 UTC m=+973.992068994"
Jan 05 22:08:03 crc kubenswrapper[5034]: I0105 22:08:03.159808 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-879gg"]
Jan 05 22:08:03 crc kubenswrapper[5034]: I0105 22:08:03.766581 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d8q77"]
Jan 05 22:08:03 crc kubenswrapper[5034]: I0105 22:08:03.767378 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:03 crc kubenswrapper[5034]: I0105 22:08:03.777188 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d8q77"]
Jan 05 22:08:03 crc kubenswrapper[5034]: I0105 22:08:03.876110 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wn7\" (UniqueName: \"kubernetes.io/projected/b9d0176c-2e60-4822-930e-a59454554a09-kube-api-access-s8wn7\") pod \"openstack-operator-index-d8q77\" (UID: \"b9d0176c-2e60-4822-930e-a59454554a09\") " pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:03 crc kubenswrapper[5034]: I0105 22:08:03.977378 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wn7\" (UniqueName: \"kubernetes.io/projected/b9d0176c-2e60-4822-930e-a59454554a09-kube-api-access-s8wn7\") pod \"openstack-operator-index-d8q77\" (UID: \"b9d0176c-2e60-4822-930e-a59454554a09\") " pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:03 crc kubenswrapper[5034]: I0105 22:08:03.997068 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8wn7\" (UniqueName: \"kubernetes.io/projected/b9d0176c-2e60-4822-930e-a59454554a09-kube-api-access-s8wn7\") pod \"openstack-operator-index-d8q77\" (UID: \"b9d0176c-2e60-4822-930e-a59454554a09\") " pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:04 crc kubenswrapper[5034]: I0105 22:08:04.082463 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:04 crc kubenswrapper[5034]: I0105 22:08:04.469558 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d8q77"]
Jan 05 22:08:04 crc kubenswrapper[5034]: W0105 22:08:04.476972 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d0176c_2e60_4822_930e_a59454554a09.slice/crio-6b4dacfdd9494b8dcf1d3e38955c7474630afb402f6eeda98e0c9ea990225281 WatchSource:0}: Error finding container 6b4dacfdd9494b8dcf1d3e38955c7474630afb402f6eeda98e0c9ea990225281: Status 404 returned error can't find the container with id 6b4dacfdd9494b8dcf1d3e38955c7474630afb402f6eeda98e0c9ea990225281
Jan 05 22:08:04 crc kubenswrapper[5034]: I0105 22:08:04.619045 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-879gg" podUID="a9e04d84-a71d-4cd6-a4a1-c8751af295f0" containerName="registry-server" containerID="cri-o://20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0" gracePeriod=2
Jan 05 22:08:04 crc kubenswrapper[5034]: I0105 22:08:04.619153 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d8q77" event={"ID":"b9d0176c-2e60-4822-930e-a59454554a09","Type":"ContainerStarted","Data":"6b4dacfdd9494b8dcf1d3e38955c7474630afb402f6eeda98e0c9ea990225281"}
Jan 05 22:08:04 crc kubenswrapper[5034]: I0105 22:08:04.935177 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-879gg"
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.093437 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqrg8\" (UniqueName: \"kubernetes.io/projected/a9e04d84-a71d-4cd6-a4a1-c8751af295f0-kube-api-access-xqrg8\") pod \"a9e04d84-a71d-4cd6-a4a1-c8751af295f0\" (UID: \"a9e04d84-a71d-4cd6-a4a1-c8751af295f0\") "
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.098716 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e04d84-a71d-4cd6-a4a1-c8751af295f0-kube-api-access-xqrg8" (OuterVolumeSpecName: "kube-api-access-xqrg8") pod "a9e04d84-a71d-4cd6-a4a1-c8751af295f0" (UID: "a9e04d84-a71d-4cd6-a4a1-c8751af295f0"). InnerVolumeSpecName "kube-api-access-xqrg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.194723 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqrg8\" (UniqueName: \"kubernetes.io/projected/a9e04d84-a71d-4cd6-a4a1-c8751af295f0-kube-api-access-xqrg8\") on node \"crc\" DevicePath \"\""
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.630353 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d8q77" event={"ID":"b9d0176c-2e60-4822-930e-a59454554a09","Type":"ContainerStarted","Data":"013a09cca9d67d11726aef9a287648645f392b25ae7ed8fc2fc9255842cbf0f7"}
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.632809 5034 generic.go:334] "Generic (PLEG): container finished" podID="a9e04d84-a71d-4cd6-a4a1-c8751af295f0" containerID="20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0" exitCode=0
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.632860 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-879gg" event={"ID":"a9e04d84-a71d-4cd6-a4a1-c8751af295f0","Type":"ContainerDied","Data":"20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0"}
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.632892 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-879gg" event={"ID":"a9e04d84-a71d-4cd6-a4a1-c8751af295f0","Type":"ContainerDied","Data":"b6a21b8c23fd512589d3456c5e5e9e812db53a61f515b645a2eac571a2e53159"}
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.632913 5034 scope.go:117] "RemoveContainer" containerID="20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0"
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.632911 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-879gg"
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.653549 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d8q77" podStartSLOduration=1.770869346 podStartE2EDuration="2.653523782s" podCreationTimestamp="2026-01-05 22:08:03 +0000 UTC" firstStartedPulling="2026-01-05 22:08:04.480290449 +0000 UTC m=+976.852289888" lastFinishedPulling="2026-01-05 22:08:05.362944885 +0000 UTC m=+977.734944324" observedRunningTime="2026-01-05 22:08:05.645911445 +0000 UTC m=+978.017910884" watchObservedRunningTime="2026-01-05 22:08:05.653523782 +0000 UTC m=+978.025523231"
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.655069 5034 scope.go:117] "RemoveContainer" containerID="20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0"
Jan 05 22:08:05 crc kubenswrapper[5034]: E0105 22:08:05.655664 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0\": container with ID starting with 20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0 not found: ID does not exist" containerID="20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0"
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.655701 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0"} err="failed to get container status \"20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0\": rpc error: code = NotFound desc = could not find container \"20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0\": container with ID starting with 20b3459d3e6aa17973d4c61c24277b9b00e0dd0e321f11f10b8cffd21f4ebbc0 not found: ID does not exist"
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.677615 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-879gg"]
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.681394 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-879gg"]
Jan 05 22:08:05 crc kubenswrapper[5034]: I0105 22:08:05.845150 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e04d84-a71d-4cd6-a4a1-c8751af295f0" path="/var/lib/kubelet/pods/a9e04d84-a71d-4cd6-a4a1-c8751af295f0/volumes"
Jan 05 22:08:14 crc kubenswrapper[5034]: I0105 22:08:14.082746 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:14 crc kubenswrapper[5034]: I0105 22:08:14.083457 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:14 crc kubenswrapper[5034]: I0105 22:08:14.115501 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:14 crc kubenswrapper[5034]: I0105 22:08:14.710859 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-d8q77"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.427326 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"]
Jan 05 22:08:15 crc kubenswrapper[5034]: E0105 22:08:15.427582 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e04d84-a71d-4cd6-a4a1-c8751af295f0" containerName="registry-server"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.427599 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e04d84-a71d-4cd6-a4a1-c8751af295f0" containerName="registry-server"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.427718 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e04d84-a71d-4cd6-a4a1-c8751af295f0" containerName="registry-server"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.428544 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.462813 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-v6hd5"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.464992 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"]
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.564252 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-util\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.564402 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm527\" (UniqueName: \"kubernetes.io/projected/e164ea47-5917-4609-94fe-1befb86c13dc-kube-api-access-fm527\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.564486 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-bundle\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.666558 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-util\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.666709 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm527\" (UniqueName: \"kubernetes.io/projected/e164ea47-5917-4609-94fe-1befb86c13dc-kube-api-access-fm527\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.666787 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-bundle\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.667105 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-util\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.667379 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-bundle\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.687274 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm527\" (UniqueName: \"kubernetes.io/projected/e164ea47-5917-4609-94fe-1befb86c13dc-kube-api-access-fm527\") pod \"c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.777840 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"
Jan 05 22:08:15 crc kubenswrapper[5034]: I0105 22:08:15.971973 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz"]
Jan 05 22:08:16 crc kubenswrapper[5034]: I0105 22:08:16.711800 5034 generic.go:334] "Generic (PLEG): container finished" podID="e164ea47-5917-4609-94fe-1befb86c13dc" containerID="b0956e4335f8c0ed1a59165a5150cf2f3906c2dc5f22ededc9850e2383d26369" exitCode=0
Jan 05 22:08:16 crc kubenswrapper[5034]: I0105 22:08:16.711843 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz" event={"ID":"e164ea47-5917-4609-94fe-1befb86c13dc","Type":"ContainerDied","Data":"b0956e4335f8c0ed1a59165a5150cf2f3906c2dc5f22ededc9850e2383d26369"}
Jan 05 22:08:16 crc kubenswrapper[5034]: I0105 22:08:16.711890 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz" event={"ID":"e164ea47-5917-4609-94fe-1befb86c13dc","Type":"ContainerStarted","Data":"97ffeb5f8ee40e0c564d725870746a763c5116999aaf96efc31104b2d8b2a2b9"}
Jan 05 22:08:19 crc kubenswrapper[5034]: I0105 22:08:19.759165 5034 generic.go:334] "Generic (PLEG): container finished" podID="e164ea47-5917-4609-94fe-1befb86c13dc" containerID="81726ff5b480ccce21100a4e3bdb630e457e8404019303c5946b1e8c929d93d9" exitCode=0
Jan 05 22:08:19 crc kubenswrapper[5034]: I0105 22:08:19.759269 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz" event={"ID":"e164ea47-5917-4609-94fe-1befb86c13dc","Type":"ContainerDied","Data":"81726ff5b480ccce21100a4e3bdb630e457e8404019303c5946b1e8c929d93d9"}
Jan 05 22:08:20 crc kubenswrapper[5034]: I0105 22:08:20.768529 5034 generic.go:334] "Generic (PLEG): container finished" podID="e164ea47-5917-4609-94fe-1befb86c13dc" containerID="7b0ba6722e0cdfa2d7342d93c48c2aa2a4ee7b7089abedf9415ad02aa24422bf" exitCode=0
Jan 05 22:08:20 crc kubenswrapper[5034]: I0105 22:08:20.768631 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz" event={"ID":"e164ea47-5917-4609-94fe-1befb86c13dc","Type":"ContainerDied","Data":"7b0ba6722e0cdfa2d7342d93c48c2aa2a4ee7b7089abedf9415ad02aa24422bf"}
Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.040446 5034 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz" Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.158808 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-bundle\") pod \"e164ea47-5917-4609-94fe-1befb86c13dc\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.159170 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-util\") pod \"e164ea47-5917-4609-94fe-1befb86c13dc\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.159207 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm527\" (UniqueName: \"kubernetes.io/projected/e164ea47-5917-4609-94fe-1befb86c13dc-kube-api-access-fm527\") pod \"e164ea47-5917-4609-94fe-1befb86c13dc\" (UID: \"e164ea47-5917-4609-94fe-1befb86c13dc\") " Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.160443 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-bundle" (OuterVolumeSpecName: "bundle") pod "e164ea47-5917-4609-94fe-1befb86c13dc" (UID: "e164ea47-5917-4609-94fe-1befb86c13dc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.166217 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e164ea47-5917-4609-94fe-1befb86c13dc-kube-api-access-fm527" (OuterVolumeSpecName: "kube-api-access-fm527") pod "e164ea47-5917-4609-94fe-1befb86c13dc" (UID: "e164ea47-5917-4609-94fe-1befb86c13dc"). InnerVolumeSpecName "kube-api-access-fm527". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.170062 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-util" (OuterVolumeSpecName: "util") pod "e164ea47-5917-4609-94fe-1befb86c13dc" (UID: "e164ea47-5917-4609-94fe-1befb86c13dc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.260743 5034 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.260778 5034 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e164ea47-5917-4609-94fe-1befb86c13dc-util\") on node \"crc\" DevicePath \"\"" Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.260790 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm527\" (UniqueName: \"kubernetes.io/projected/e164ea47-5917-4609-94fe-1befb86c13dc-kube-api-access-fm527\") on node \"crc\" DevicePath \"\"" Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.789337 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz" event={"ID":"e164ea47-5917-4609-94fe-1befb86c13dc","Type":"ContainerDied","Data":"97ffeb5f8ee40e0c564d725870746a763c5116999aaf96efc31104b2d8b2a2b9"} Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.789439 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97ffeb5f8ee40e0c564d725870746a763c5116999aaf96efc31104b2d8b2a2b9" Jan 05 22:08:22 crc kubenswrapper[5034]: I0105 22:08:22.789384 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.630432 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g"] Jan 05 22:08:28 crc kubenswrapper[5034]: E0105 22:08:28.631144 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e164ea47-5917-4609-94fe-1befb86c13dc" containerName="util" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.631162 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e164ea47-5917-4609-94fe-1befb86c13dc" containerName="util" Jan 05 22:08:28 crc kubenswrapper[5034]: E0105 22:08:28.631172 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e164ea47-5917-4609-94fe-1befb86c13dc" containerName="pull" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.631180 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e164ea47-5917-4609-94fe-1befb86c13dc" containerName="pull" Jan 05 22:08:28 crc kubenswrapper[5034]: E0105 22:08:28.631205 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e164ea47-5917-4609-94fe-1befb86c13dc" containerName="extract" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.631213 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e164ea47-5917-4609-94fe-1befb86c13dc" containerName="extract" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.631381 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e164ea47-5917-4609-94fe-1befb86c13dc" containerName="extract" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.631836 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.634327 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-8hvdv" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.644251 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bqdc\" (UniqueName: \"kubernetes.io/projected/1dd0be48-f659-4579-9170-4525ab5afc33-kube-api-access-2bqdc\") pod \"openstack-operator-controller-operator-5845bc5b8-d878g\" (UID: \"1dd0be48-f659-4579-9170-4525ab5afc33\") " pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.689942 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g"] Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.745852 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bqdc\" (UniqueName: \"kubernetes.io/projected/1dd0be48-f659-4579-9170-4525ab5afc33-kube-api-access-2bqdc\") pod \"openstack-operator-controller-operator-5845bc5b8-d878g\" (UID: \"1dd0be48-f659-4579-9170-4525ab5afc33\") " pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.773265 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bqdc\" (UniqueName: \"kubernetes.io/projected/1dd0be48-f659-4579-9170-4525ab5afc33-kube-api-access-2bqdc\") pod \"openstack-operator-controller-operator-5845bc5b8-d878g\" (UID: \"1dd0be48-f659-4579-9170-4525ab5afc33\") " pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" Jan 05 22:08:28 crc kubenswrapper[5034]: I0105 22:08:28.949322 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" Jan 05 22:08:29 crc kubenswrapper[5034]: I0105 22:08:29.226676 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g"] Jan 05 22:08:29 crc kubenswrapper[5034]: I0105 22:08:29.829460 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" event={"ID":"1dd0be48-f659-4579-9170-4525ab5afc33","Type":"ContainerStarted","Data":"f8c7c83e53f4781c5838837a8e8ec6a063d74ebcca4061176d9085cc0d6afd71"} Jan 05 22:08:34 crc kubenswrapper[5034]: I0105 22:08:34.870554 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" event={"ID":"1dd0be48-f659-4579-9170-4525ab5afc33","Type":"ContainerStarted","Data":"5c05bf66e1398f6078f0b1c90a65f0d50ec513daa039492d0d02014c61090cb4"} Jan 05 22:08:34 crc kubenswrapper[5034]: I0105 22:08:34.871150 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" Jan 05 22:08:34 crc kubenswrapper[5034]: I0105 22:08:34.901626 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" podStartSLOduration=2.06667469 podStartE2EDuration="6.901610843s" podCreationTimestamp="2026-01-05 22:08:28 +0000 UTC" firstStartedPulling="2026-01-05 22:08:29.265792072 +0000 UTC m=+1001.637791511" lastFinishedPulling="2026-01-05 22:08:34.100728225 +0000 UTC m=+1006.472727664" observedRunningTime="2026-01-05 22:08:34.897415013 +0000 UTC m=+1007.269414442" watchObservedRunningTime="2026-01-05 22:08:34.901610843 +0000 UTC m=+1007.273610282" Jan 05 22:08:48 crc kubenswrapper[5034]: I0105 22:08:48.953197 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5845bc5b8-d878g" Jan 05 22:09:06 crc kubenswrapper[5034]: I0105 22:09:06.981856 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp"] Jan 05 22:09:06 crc kubenswrapper[5034]: I0105 22:09:06.983265 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" Jan 05 22:09:06 crc kubenswrapper[5034]: I0105 22:09:06.988864 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-c7trn" Jan 05 22:09:06 crc kubenswrapper[5034]: I0105 22:09:06.995938 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.002337 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.003169 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.007294 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-hz79g" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.010432 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.011180 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.013318 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-nl6hv" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.023418 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.055426 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.070388 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bcn\" (UniqueName: \"kubernetes.io/projected/496eda61-616b-4c26-8a21-f7c32d44b301-kube-api-access-s2bcn\") pod \"designate-operator-controller-manager-66f8b87655-jvpk6\" (UID: \"496eda61-616b-4c26-8a21-f7c32d44b301\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.070512 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ts96\" (UniqueName: \"kubernetes.io/projected/6fd87fd1-9317-4df1-be11-c509d9643f84-kube-api-access-8ts96\") pod \"barbican-operator-controller-manager-f6f74d6db-svkbp\" (UID: \"6fd87fd1-9317-4df1-be11-c509d9643f84\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.070582 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtcm\" (UniqueName: \"kubernetes.io/projected/896815d6-faea-4d19-ac51-a51653fcb729-kube-api-access-vrtcm\") pod \"cinder-operator-controller-manager-78979fc445-kgxlf\" (UID: \"896815d6-faea-4d19-ac51-a51653fcb729\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.116249 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-z872k"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.117208 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.120936 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-46687" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.123894 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-z872k"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.152788 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.153701 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.159225 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ls9b4" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.161781 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.169414 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.170199 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.171286 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bcn\" (UniqueName: \"kubernetes.io/projected/496eda61-616b-4c26-8a21-f7c32d44b301-kube-api-access-s2bcn\") pod \"designate-operator-controller-manager-66f8b87655-jvpk6\" (UID: \"496eda61-616b-4c26-8a21-f7c32d44b301\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.171340 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ts96\" (UniqueName: \"kubernetes.io/projected/6fd87fd1-9317-4df1-be11-c509d9643f84-kube-api-access-8ts96\") pod \"barbican-operator-controller-manager-f6f74d6db-svkbp\" (UID: \"6fd87fd1-9317-4df1-be11-c509d9643f84\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.171377 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mfd\" (UniqueName: \"kubernetes.io/projected/1649d2ab-0b0e-475a-be2b-485845105d31-kube-api-access-q7mfd\") pod \"glance-operator-controller-manager-7b549fc966-z872k\" (UID: \"1649d2ab-0b0e-475a-be2b-485845105d31\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.171405 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtcm\" (UniqueName: \"kubernetes.io/projected/896815d6-faea-4d19-ac51-a51653fcb729-kube-api-access-vrtcm\") pod \"cinder-operator-controller-manager-78979fc445-kgxlf\" (UID: \"896815d6-faea-4d19-ac51-a51653fcb729\") " 
pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.171423 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mtg\" (UniqueName: \"kubernetes.io/projected/eaf2fcd8-230f-414c-88dc-68ccb91b009e-kube-api-access-t6mtg\") pod \"heat-operator-controller-manager-658dd65b86-xjmnm\" (UID: \"eaf2fcd8-230f-414c-88dc-68ccb91b009e\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.175187 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-b9pxp" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.196337 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.209495 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.210390 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.217776 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtcm\" (UniqueName: \"kubernetes.io/projected/896815d6-faea-4d19-ac51-a51653fcb729-kube-api-access-vrtcm\") pod \"cinder-operator-controller-manager-78979fc445-kgxlf\" (UID: \"896815d6-faea-4d19-ac51-a51653fcb729\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.217959 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.218107 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8p89z" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.225366 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bcn\" (UniqueName: \"kubernetes.io/projected/496eda61-616b-4c26-8a21-f7c32d44b301-kube-api-access-s2bcn\") pod \"designate-operator-controller-manager-66f8b87655-jvpk6\" (UID: \"496eda61-616b-4c26-8a21-f7c32d44b301\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.226204 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ts96\" (UniqueName: \"kubernetes.io/projected/6fd87fd1-9317-4df1-be11-c509d9643f84-kube-api-access-8ts96\") pod \"barbican-operator-controller-manager-f6f74d6db-svkbp\" (UID: \"6fd87fd1-9317-4df1-be11-c509d9643f84\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.231987 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.232761 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.242867 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-89csr" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.250205 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.254886 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.266341 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.267106 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.272827 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.272889 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8kq\" (UniqueName: \"kubernetes.io/projected/55117dc3-bdf7-4967-830e-8465bd939669-kube-api-access-tp8kq\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.272927 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4258d\" (UniqueName: \"kubernetes.io/projected/c0ad1066-4da0-43bb-8599-3bd8a5e445f4-kube-api-access-4258d\") pod \"ironic-operator-controller-manager-f99f54bc8-w9zgw\" (UID: \"c0ad1066-4da0-43bb-8599-3bd8a5e445f4\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.272967 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7mfd\" (UniqueName: \"kubernetes.io/projected/1649d2ab-0b0e-475a-be2b-485845105d31-kube-api-access-q7mfd\") pod \"glance-operator-controller-manager-7b549fc966-z872k\" (UID: \"1649d2ab-0b0e-475a-be2b-485845105d31\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.272998 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6mtg\" (UniqueName: \"kubernetes.io/projected/eaf2fcd8-230f-414c-88dc-68ccb91b009e-kube-api-access-t6mtg\") pod \"heat-operator-controller-manager-658dd65b86-xjmnm\" (UID: \"eaf2fcd8-230f-414c-88dc-68ccb91b009e\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.273054 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp44h\" (UniqueName: \"kubernetes.io/projected/8cb3a336-40ad-44f2-8817-09d0d9807a1a-kube-api-access-fp44h\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-k2nrc\" (UID: \"8cb3a336-40ad-44f2-8817-09d0d9807a1a\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.273649 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kzb6m" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.287188 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.305175 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.306256 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.310562 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.322741 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-m99pz" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.329458 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.332006 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.337153 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.347130 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.348120 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.349408 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.349998 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.354449 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6mtg\" (UniqueName: \"kubernetes.io/projected/eaf2fcd8-230f-414c-88dc-68ccb91b009e-kube-api-access-t6mtg\") pod \"heat-operator-controller-manager-658dd65b86-xjmnm\" (UID: \"eaf2fcd8-230f-414c-88dc-68ccb91b009e\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.354873 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-v4kvt" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.355258 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mhg9n" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.375288 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp8kq\" (UniqueName: \"kubernetes.io/projected/55117dc3-bdf7-4967-830e-8465bd939669-kube-api-access-tp8kq\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.375574 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjd6w\" (UniqueName: \"kubernetes.io/projected/3a788872-b35e-4386-97a0-55b225e77f3c-kube-api-access-pjd6w\") pod \"mariadb-operator-controller-manager-7b88bfc995-cg7pq\" (UID: \"3a788872-b35e-4386-97a0-55b225e77f3c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.375682 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4258d\" (UniqueName: \"kubernetes.io/projected/c0ad1066-4da0-43bb-8599-3bd8a5e445f4-kube-api-access-4258d\") pod \"ironic-operator-controller-manager-f99f54bc8-w9zgw\" (UID: \"c0ad1066-4da0-43bb-8599-3bd8a5e445f4\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.375792 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z5rh\" (UniqueName: \"kubernetes.io/projected/40b7d083-f9c2-4114-9fea-7b205a0f2699-kube-api-access-5z5rh\") pod \"keystone-operator-controller-manager-568985c78-dqv5l\" (UID: \"40b7d083-f9c2-4114-9fea-7b205a0f2699\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.375891 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkrm\" (UniqueName: \"kubernetes.io/projected/7ab1c07d-8d12-4d71-b191-3334da2b04dd-kube-api-access-jlkrm\") pod \"manila-operator-controller-manager-598945d5b8-4lxms\" (UID: \"7ab1c07d-8d12-4d71-b191-3334da2b04dd\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.376026 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp44h\" (UniqueName: 
\"kubernetes.io/projected/8cb3a336-40ad-44f2-8817-09d0d9807a1a-kube-api-access-fp44h\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-k2nrc\" (UID: \"8cb3a336-40ad-44f2-8817-09d0d9807a1a\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.376158 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvk7f\" (UniqueName: \"kubernetes.io/projected/862bb25b-65e6-4866-a881-99ff200bd44c-kube-api-access-rvk7f\") pod \"neutron-operator-controller-manager-7cd87b778f-xj8gg\" (UID: \"862bb25b-65e6-4866-a881-99ff200bd44c\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.376245 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:07 crc kubenswrapper[5034]: E0105 22:09:07.376407 5034 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:07 crc kubenswrapper[5034]: E0105 22:09:07.376508 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert podName:55117dc3-bdf7-4967-830e-8465bd939669 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:07.876489607 +0000 UTC m=+1040.248489036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert") pod "infra-operator-controller-manager-6d99759cf-qjlr7" (UID: "55117dc3-bdf7-4967-830e-8465bd939669") : secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.395092 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7mfd\" (UniqueName: \"kubernetes.io/projected/1649d2ab-0b0e-475a-be2b-485845105d31-kube-api-access-q7mfd\") pod \"glance-operator-controller-manager-7b549fc966-z872k\" (UID: \"1649d2ab-0b0e-475a-be2b-485845105d31\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.404584 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.405466 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.406017 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp8kq\" (UniqueName: \"kubernetes.io/projected/55117dc3-bdf7-4967-830e-8465bd939669-kube-api-access-tp8kq\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.409027 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-xr5sz" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.419636 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4258d\" (UniqueName: \"kubernetes.io/projected/c0ad1066-4da0-43bb-8599-3bd8a5e445f4-kube-api-access-4258d\") pod \"ironic-operator-controller-manager-f99f54bc8-w9zgw\" (UID: \"c0ad1066-4da0-43bb-8599-3bd8a5e445f4\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.419987 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.420638 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp44h\" (UniqueName: \"kubernetes.io/projected/8cb3a336-40ad-44f2-8817-09d0d9807a1a-kube-api-access-fp44h\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-k2nrc\" (UID: \"8cb3a336-40ad-44f2-8817-09d0d9807a1a\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.449469 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.469247 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.477665 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjd6w\" (UniqueName: \"kubernetes.io/projected/3a788872-b35e-4386-97a0-55b225e77f3c-kube-api-access-pjd6w\") pod \"mariadb-operator-controller-manager-7b88bfc995-cg7pq\" (UID: \"3a788872-b35e-4386-97a0-55b225e77f3c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.477738 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z5rh\" (UniqueName: \"kubernetes.io/projected/40b7d083-f9c2-4114-9fea-7b205a0f2699-kube-api-access-5z5rh\") pod \"keystone-operator-controller-manager-568985c78-dqv5l\" (UID: \"40b7d083-f9c2-4114-9fea-7b205a0f2699\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.477776 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gxm\" (UniqueName: \"kubernetes.io/projected/2e72dc34-e146-4759-92a9-472b505e452e-kube-api-access-z9gxm\") pod \"nova-operator-controller-manager-5fbbf8b6cc-brmhw\" (UID: \"2e72dc34-e146-4759-92a9-472b505e452e\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.477869 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlkrm\" (UniqueName: \"kubernetes.io/projected/7ab1c07d-8d12-4d71-b191-3334da2b04dd-kube-api-access-jlkrm\") pod \"manila-operator-controller-manager-598945d5b8-4lxms\" (UID: \"7ab1c07d-8d12-4d71-b191-3334da2b04dd\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.478032 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvk7f\" (UniqueName: \"kubernetes.io/projected/862bb25b-65e6-4866-a881-99ff200bd44c-kube-api-access-rvk7f\") pod \"neutron-operator-controller-manager-7cd87b778f-xj8gg\" (UID: \"862bb25b-65e6-4866-a881-99ff200bd44c\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.481953 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.482862 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.487339 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ss5t7" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.488952 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.499618 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlkrm\" (UniqueName: \"kubernetes.io/projected/7ab1c07d-8d12-4d71-b191-3334da2b04dd-kube-api-access-jlkrm\") pod \"manila-operator-controller-manager-598945d5b8-4lxms\" (UID: \"7ab1c07d-8d12-4d71-b191-3334da2b04dd\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.499683 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjd6w\" (UniqueName: \"kubernetes.io/projected/3a788872-b35e-4386-97a0-55b225e77f3c-kube-api-access-pjd6w\") pod \"mariadb-operator-controller-manager-7b88bfc995-cg7pq\" (UID: \"3a788872-b35e-4386-97a0-55b225e77f3c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.505203 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvk7f\" (UniqueName: \"kubernetes.io/projected/862bb25b-65e6-4866-a881-99ff200bd44c-kube-api-access-rvk7f\") pod \"neutron-operator-controller-manager-7cd87b778f-xj8gg\" (UID: \"862bb25b-65e6-4866-a881-99ff200bd44c\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.508344 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.509249 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z5rh\" (UniqueName: \"kubernetes.io/projected/40b7d083-f9c2-4114-9fea-7b205a0f2699-kube-api-access-5z5rh\") pod \"keystone-operator-controller-manager-568985c78-dqv5l\" (UID: \"40b7d083-f9c2-4114-9fea-7b205a0f2699\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.515723 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.516518 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.521271 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.522084 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.523140 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.528782 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.529648 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.532989 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4lzxr" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.533396 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.534886 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-txz6h" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.550052 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.558662 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.584020 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.593840 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-bqnft" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.602592 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.602746 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9gxm\" (UniqueName: \"kubernetes.io/projected/2e72dc34-e146-4759-92a9-472b505e452e-kube-api-access-z9gxm\") pod \"nova-operator-controller-manager-5fbbf8b6cc-brmhw\" (UID: \"2e72dc34-e146-4759-92a9-472b505e452e\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.602808 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwb5\" (UniqueName: \"kubernetes.io/projected/70d0025c-b385-4d1d-aaae-12d916644086-kube-api-access-hmwb5\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.613878 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.632566 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.647650 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.647730 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9gxm\" (UniqueName: \"kubernetes.io/projected/2e72dc34-e146-4759-92a9-472b505e452e-kube-api-access-z9gxm\") pod \"nova-operator-controller-manager-5fbbf8b6cc-brmhw\" (UID: \"2e72dc34-e146-4759-92a9-472b505e452e\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.657531 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.666696 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-pwpx2" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.685944 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.704331 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmwb5\" (UniqueName: \"kubernetes.io/projected/70d0025c-b385-4d1d-aaae-12d916644086-kube-api-access-hmwb5\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.705741 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.705876 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrvg\" (UniqueName: \"kubernetes.io/projected/51008687-6437-41e7-be67-d0b8504af846-kube-api-access-qzrvg\") pod \"placement-operator-controller-manager-9b6f8f78c-gqgz4\" (UID: \"51008687-6437-41e7-be67-d0b8504af846\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.706016 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2gwg\" (UniqueName: \"kubernetes.io/projected/523f1764-4ebe-4424-911d-e9e8b9a06576-kube-api-access-h2gwg\") pod \"octavia-operator-controller-manager-68c649d9d-trszk\" (UID: \"523f1764-4ebe-4424-911d-e9e8b9a06576\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.706044 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cprnz\" (UniqueName: \"kubernetes.io/projected/f446cb2d-a8c4-460c-8538-3cd339280043-kube-api-access-cprnz\") pod \"ovn-operator-controller-manager-bf6d4f946-qc578\" (UID: \"f446cb2d-a8c4-460c-8538-3cd339280043\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" Jan 05 22:09:07 crc kubenswrapper[5034]: E0105 22:09:07.706982 5034 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:07 crc kubenswrapper[5034]: E0105 22:09:07.707033 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert podName:70d0025c-b385-4d1d-aaae-12d916644086 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:08.207015348 +0000 UTC m=+1040.579014777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert") pod "openstack-baremetal-operator-controller-manager-596c464d77km7w8" (UID: "70d0025c-b385-4d1d-aaae-12d916644086") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.707624 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.708781 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.723092 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.724064 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.731445 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nb8kl" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.735484 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmwb5\" (UniqueName: \"kubernetes.io/projected/70d0025c-b385-4d1d-aaae-12d916644086-kube-api-access-hmwb5\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.764152 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.777934 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.803046 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.809126 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrvg\" (UniqueName: \"kubernetes.io/projected/51008687-6437-41e7-be67-d0b8504af846-kube-api-access-qzrvg\") pod \"placement-operator-controller-manager-9b6f8f78c-gqgz4\" (UID: \"51008687-6437-41e7-be67-d0b8504af846\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.809171 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkmf\" (UniqueName: \"kubernetes.io/projected/a5129b08-723a-4f31-aeca-bfa82f192ca6-kube-api-access-8mkmf\") pod \"swift-operator-controller-manager-bb586bbf4-66brd\" (UID: \"a5129b08-723a-4f31-aeca-bfa82f192ca6\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.809198 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2gwg\" (UniqueName: \"kubernetes.io/projected/523f1764-4ebe-4424-911d-e9e8b9a06576-kube-api-access-h2gwg\") pod \"octavia-operator-controller-manager-68c649d9d-trszk\" (UID: \"523f1764-4ebe-4424-911d-e9e8b9a06576\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.809219 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cprnz\" (UniqueName: \"kubernetes.io/projected/f446cb2d-a8c4-460c-8538-3cd339280043-kube-api-access-cprnz\") pod \"ovn-operator-controller-manager-bf6d4f946-qc578\" (UID: \"f446cb2d-a8c4-460c-8538-3cd339280043\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.810286 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.811652 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.814090 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lh47g" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.823130 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.835641 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2gwg\" (UniqueName: \"kubernetes.io/projected/523f1764-4ebe-4424-911d-e9e8b9a06576-kube-api-access-h2gwg\") pod \"octavia-operator-controller-manager-68c649d9d-trszk\" (UID: \"523f1764-4ebe-4424-911d-e9e8b9a06576\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.847380 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.852930 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cprnz\" (UniqueName: \"kubernetes.io/projected/f446cb2d-a8c4-460c-8538-3cd339280043-kube-api-access-cprnz\") pod \"ovn-operator-controller-manager-bf6d4f946-qc578\" (UID: \"f446cb2d-a8c4-460c-8538-3cd339280043\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.877457 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.882975 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrvg\" (UniqueName: \"kubernetes.io/projected/51008687-6437-41e7-be67-d0b8504af846-kube-api-access-qzrvg\") pod \"placement-operator-controller-manager-9b6f8f78c-gqgz4\" (UID: \"51008687-6437-41e7-be67-d0b8504af846\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.887813 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.889135 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.891058 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.897991 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.899676 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-q2hn5" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.910778 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-449l5\" (UniqueName: \"kubernetes.io/projected/c4c31dca-5e18-4a1f-a8ea-f10abb68d479-kube-api-access-449l5\") pod \"telemetry-operator-controller-manager-68d988df55-49jv5\" (UID: \"c4c31dca-5e18-4a1f-a8ea-f10abb68d479\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.910979 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:07 crc kubenswrapper[5034]: E0105 22:09:07.911145 5034 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:07 crc kubenswrapper[5034]: E0105 22:09:07.923947 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert 
podName:55117dc3-bdf7-4967-830e-8465bd939669 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:08.923913857 +0000 UTC m=+1041.295913286 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert") pod "infra-operator-controller-manager-6d99759cf-qjlr7" (UID: "55117dc3-bdf7-4967-830e-8465bd939669") : secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.924493 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mkmf\" (UniqueName: \"kubernetes.io/projected/a5129b08-723a-4f31-aeca-bfa82f192ca6-kube-api-access-8mkmf\") pod \"swift-operator-controller-manager-bb586bbf4-66brd\" (UID: \"a5129b08-723a-4f31-aeca-bfa82f192ca6\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.924591 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvtg\" (UniqueName: \"kubernetes.io/projected/7f51b169-2963-4bcc-881f-ad5e0eb8ebd7-kube-api-access-5dvtg\") pod \"test-operator-controller-manager-6c866cfdcb-b9x9f\" (UID: \"7f51b169-2963-4bcc-881f-ad5e0eb8ebd7\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.934829 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.980788 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mkmf\" (UniqueName: \"kubernetes.io/projected/a5129b08-723a-4f31-aeca-bfa82f192ca6-kube-api-access-8mkmf\") pod \"swift-operator-controller-manager-bb586bbf4-66brd\" (UID: \"a5129b08-723a-4f31-aeca-bfa82f192ca6\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.991146 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s"] Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.992229 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.995366 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.995372 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7nc7v" Jan 05 22:09:07 crc kubenswrapper[5034]: I0105 22:09:07.995796 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.001251 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.022305 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.025469 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-449l5\" (UniqueName: \"kubernetes.io/projected/c4c31dca-5e18-4a1f-a8ea-f10abb68d479-kube-api-access-449l5\") pod \"telemetry-operator-controller-manager-68d988df55-49jv5\" (UID: \"c4c31dca-5e18-4a1f-a8ea-f10abb68d479\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.025607 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5r8\" (UniqueName: \"kubernetes.io/projected/cf91ac7d-1a00-4658-9270-8a7186602088-kube-api-access-qm5r8\") pod \"watcher-operator-controller-manager-9dbdf6486-zbzfs\" (UID: \"cf91ac7d-1a00-4658-9270-8a7186602088\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.032369 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvtg\" (UniqueName: \"kubernetes.io/projected/7f51b169-2963-4bcc-881f-ad5e0eb8ebd7-kube-api-access-5dvtg\") pod \"test-operator-controller-manager-6c866cfdcb-b9x9f\" (UID: \"7f51b169-2963-4bcc-881f-ad5e0eb8ebd7\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.053423 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.054348 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.060131 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6ps8t" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.067449 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.074704 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-449l5\" (UniqueName: \"kubernetes.io/projected/c4c31dca-5e18-4a1f-a8ea-f10abb68d479-kube-api-access-449l5\") pod \"telemetry-operator-controller-manager-68d988df55-49jv5\" (UID: \"c4c31dca-5e18-4a1f-a8ea-f10abb68d479\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.105678 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvtg\" (UniqueName: \"kubernetes.io/projected/7f51b169-2963-4bcc-881f-ad5e0eb8ebd7-kube-api-access-5dvtg\") pod \"test-operator-controller-manager-6c866cfdcb-b9x9f\" (UID: \"7f51b169-2963-4bcc-881f-ad5e0eb8ebd7\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.105749 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.136922 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.137362 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.137390 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xgtm\" (UniqueName: \"kubernetes.io/projected/ec391381-ae2f-4f53-a3bc-42b7b47a3727-kube-api-access-5xgtm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zsckf\" (UID: \"ec391381-ae2f-4f53-a3bc-42b7b47a3727\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.137413 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6d6\" (UniqueName: \"kubernetes.io/projected/23276bd0-4dde-4a42-8c97-481788b2c35c-kube-api-access-cd6d6\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 
22:09:08.137490 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5r8\" (UniqueName: \"kubernetes.io/projected/cf91ac7d-1a00-4658-9270-8a7186602088-kube-api-access-qm5r8\") pod \"watcher-operator-controller-manager-9dbdf6486-zbzfs\" (UID: \"cf91ac7d-1a00-4658-9270-8a7186602088\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.155140 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.225417 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5r8\" (UniqueName: \"kubernetes.io/projected/cf91ac7d-1a00-4658-9270-8a7186602088-kube-api-access-qm5r8\") pod \"watcher-operator-controller-manager-9dbdf6486-zbzfs\" (UID: \"cf91ac7d-1a00-4658-9270-8a7186602088\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.231567 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.233946 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.263285 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.263383 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.263424 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xgtm\" (UniqueName: \"kubernetes.io/projected/ec391381-ae2f-4f53-a3bc-42b7b47a3727-kube-api-access-5xgtm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zsckf\" (UID: \"ec391381-ae2f-4f53-a3bc-42b7b47a3727\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.263501 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6d6\" (UniqueName: \"kubernetes.io/projected/23276bd0-4dde-4a42-8c97-481788b2c35c-kube-api-access-cd6d6\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.263575 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert\") pod 
\"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.263940 5034 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.264007 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert podName:70d0025c-b385-4d1d-aaae-12d916644086 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:09.26398237 +0000 UTC m=+1041.635981809 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert") pod "openstack-baremetal-operator-controller-manager-596c464d77km7w8" (UID: "70d0025c-b385-4d1d-aaae-12d916644086") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.264516 5034 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.264548 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:08.764539065 +0000 UTC m=+1041.136538494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "metrics-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.264592 5034 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.264614 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:08.764607967 +0000 UTC m=+1041.136607396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "webhook-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.299041 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xgtm\" (UniqueName: \"kubernetes.io/projected/ec391381-ae2f-4f53-a3bc-42b7b47a3727-kube-api-access-5xgtm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zsckf\" (UID: \"ec391381-ae2f-4f53-a3bc-42b7b47a3727\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.325032 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6d6\" (UniqueName: \"kubernetes.io/projected/23276bd0-4dde-4a42-8c97-481788b2c35c-kube-api-access-cd6d6\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.327982 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-z872k"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.363955 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" Jan 05 22:09:08 crc kubenswrapper[5034]: W0105 22:09:08.394124 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd87fd1_9317_4df1_be11_c509d9643f84.slice/crio-a5d8668eb1f3e4623b14e7b96a6b9e73ea8feb348e5bacbe8a96bba8e160bc2d WatchSource:0}: Error finding container a5d8668eb1f3e4623b14e7b96a6b9e73ea8feb348e5bacbe8a96bba8e160bc2d: Status 404 returned error can't find the container with id a5d8668eb1f3e4623b14e7b96a6b9e73ea8feb348e5bacbe8a96bba8e160bc2d Jan 05 22:09:08 crc kubenswrapper[5034]: W0105 22:09:08.450745 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1649d2ab_0b0e_475a_be2b_485845105d31.slice/crio-05cd5424f019ca2c750e5312e4c31c8ce2b266e48617fc2e88460bd1649ae03a WatchSource:0}: Error finding container 05cd5424f019ca2c750e5312e4c31c8ce2b266e48617fc2e88460bd1649ae03a: Status 404 returned error can't find the container with id 05cd5424f019ca2c750e5312e4c31c8ce2b266e48617fc2e88460bd1649ae03a Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.513702 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.647871 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm"] Jan 05 22:09:08 crc kubenswrapper[5034]: W0105 22:09:08.666071 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf2fcd8_230f_414c_88dc_68ccb91b009e.slice/crio-b63b9e7ac1f35856a213b07b96420f4c952e67fce10ea1c7109301335ef58f3b WatchSource:0}: Error finding container b63b9e7ac1f35856a213b07b96420f4c952e67fce10ea1c7109301335ef58f3b: Status 404 returned error can't find the container with id b63b9e7ac1f35856a213b07b96420f4c952e67fce10ea1c7109301335ef58f3b Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.752408 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.783714 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.783853 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.783947 5034 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.784019 5034 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.784036 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:09.784011841 +0000 UTC m=+1042.156011380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "webhook-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.784096 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:09.784056682 +0000 UTC m=+1042.156056121 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "metrics-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.886612 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.894660 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw"] Jan 05 22:09:08 crc kubenswrapper[5034]: I0105 22:09:08.986121 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.986339 5034 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:08 crc kubenswrapper[5034]: E0105 22:09:08.986395 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert podName:55117dc3-bdf7-4967-830e-8465bd939669 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:10.986375547 +0000 UTC m=+1043.358374986 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert") pod "infra-operator-controller-manager-6d99759cf-qjlr7" (UID: "55117dc3-bdf7-4967-830e-8465bd939669") : secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.080650 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg"] Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.094356 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw"] Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.106369 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l"] Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.114858 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc"] Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.119229 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq"] Jan 05 22:09:09 crc kubenswrapper[5034]: W0105 22:09:09.121358 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cb3a336_40ad_44f2_8817_09d0d9807a1a.slice/crio-60e0ed8a43d928faa6c52fbd7b2f332c6572d3e45bbe208c7a1504a4a9793500 WatchSource:0}: Error finding container 60e0ed8a43d928faa6c52fbd7b2f332c6572d3e45bbe208c7a1504a4a9793500: Status 404 returned error can't find the container with id 60e0ed8a43d928faa6c52fbd7b2f332c6572d3e45bbe208c7a1504a4a9793500 Jan 05 22:09:09 crc kubenswrapper[5034]: W0105 
22:09:09.122572 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a788872_b35e_4386_97a0_55b225e77f3c.slice/crio-42dc406104bffe639b59a729ca4f796348aae4f8100655b6fdf1877d8df5f21e WatchSource:0}: Error finding container 42dc406104bffe639b59a729ca4f796348aae4f8100655b6fdf1877d8df5f21e: Status 404 returned error can't find the container with id 42dc406104bffe639b59a729ca4f796348aae4f8100655b6fdf1877d8df5f21e Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.139836 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4"] Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.187304 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" event={"ID":"3a788872-b35e-4386-97a0-55b225e77f3c","Type":"ContainerStarted","Data":"42dc406104bffe639b59a729ca4f796348aae4f8100655b6fdf1877d8df5f21e"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.190114 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" event={"ID":"51008687-6437-41e7-be67-d0b8504af846","Type":"ContainerStarted","Data":"b902f6254cb19b981637c168cf3641096fadfde34089319dbbf41366fb7647ce"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.192030 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" event={"ID":"8cb3a336-40ad-44f2-8817-09d0d9807a1a","Type":"ContainerStarted","Data":"60e0ed8a43d928faa6c52fbd7b2f332c6572d3e45bbe208c7a1504a4a9793500"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.192779 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" event={"ID":"6fd87fd1-9317-4df1-be11-c509d9643f84","Type":"ContainerStarted","Data":"a5d8668eb1f3e4623b14e7b96a6b9e73ea8feb348e5bacbe8a96bba8e160bc2d"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.193846 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" event={"ID":"862bb25b-65e6-4866-a881-99ff200bd44c","Type":"ContainerStarted","Data":"2c3c7443f65b763d178047877a7c473dfa12b2961e334634312023001ad7c416"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.195275 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" event={"ID":"c0ad1066-4da0-43bb-8599-3bd8a5e445f4","Type":"ContainerStarted","Data":"4042fc1396c4f7886f7e2912c55b346256343a9062020062cf9459940fd223cf"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.196747 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" event={"ID":"2e72dc34-e146-4759-92a9-472b505e452e","Type":"ContainerStarted","Data":"83ae2a1bd00811e4db94077c8665e115a985128fb2baeafd0f5af946a6284f31"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.200700 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" event={"ID":"896815d6-faea-4d19-ac51-a51653fcb729","Type":"ContainerStarted","Data":"cd911be567b46e5185b22d8b959afb53eec52a2476beef5d1999d55b4abd48d2"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.203296 5034 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" event={"ID":"eaf2fcd8-230f-414c-88dc-68ccb91b009e","Type":"ContainerStarted","Data":"b63b9e7ac1f35856a213b07b96420f4c952e67fce10ea1c7109301335ef58f3b"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.205980 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" event={"ID":"7ab1c07d-8d12-4d71-b191-3334da2b04dd","Type":"ContainerStarted","Data":"16bb98680965b239c81d079e1809435609fc1c9ddf065b29b1516091b900cb34"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.214586 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" event={"ID":"1649d2ab-0b0e-475a-be2b-485845105d31","Type":"ContainerStarted","Data":"05cd5424f019ca2c750e5312e4c31c8ce2b266e48617fc2e88460bd1649ae03a"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.219465 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" event={"ID":"496eda61-616b-4c26-8a21-f7c32d44b301","Type":"ContainerStarted","Data":"d4594debb3d0b36a02f572541d53835608105f7400dd4a97a31a0f8b3ecbc422"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.220453 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" event={"ID":"40b7d083-f9c2-4114-9fea-7b205a0f2699","Type":"ContainerStarted","Data":"a5a6270fb4c8aaf71b1c0555c73452fc3b340fa7efdbf83d7eb2e020eb12551c"} Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.265275 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk"] Jan 05 22:09:09 crc kubenswrapper[5034]: W0105 22:09:09.277218 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod523f1764_4ebe_4424_911d_e9e8b9a06576.slice/crio-0eb592158210d5f1c60cefc7539341cab344e95cb459ddb531f41589433a552e WatchSource:0}: Error finding container 0eb592158210d5f1c60cefc7539341cab344e95cb459ddb531f41589433a552e: Status 404 returned error can't find the container with id 0eb592158210d5f1c60cefc7539341cab344e95cb459ddb531f41589433a552e Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.284564 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578"] Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.291066 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.291523 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs"] Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.291534 5034 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.291639 5034 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert podName:70d0025c-b385-4d1d-aaae-12d916644086 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:11.291620288 +0000 UTC m=+1043.663619727 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert") pod "openstack-baremetal-operator-controller-manager-596c464d77km7w8" (UID: "70d0025c-b385-4d1d-aaae-12d916644086") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.303177 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qm5r8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-zbzfs_openstack-operators(cf91ac7d-1a00-4658-9270-8a7186602088): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.304470 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" podUID="cf91ac7d-1a00-4658-9270-8a7186602088" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 
22:09:09.305659 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-449l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-68d988df55-49jv5_openstack-operators(c4c31dca-5e18-4a1f-a8ea-f10abb68d479): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.306064 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5dvtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-b9x9f_openstack-operators(7f51b169-2963-4bcc-881f-ad5e0eb8ebd7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.307338 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" podUID="7f51b169-2963-4bcc-881f-ad5e0eb8ebd7" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.307391 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" podUID="c4c31dca-5e18-4a1f-a8ea-f10abb68d479" Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.310238 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5"] Jan 05 22:09:09 crc kubenswrapper[5034]: W0105 22:09:09.313504 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5129b08_723a_4f31_aeca_bfa82f192ca6.slice/crio-fa2233b63cbebc7a4b033c53fd55f8425f4572717413b2ab09aa924a3f7b589d WatchSource:0}: Error finding container fa2233b63cbebc7a4b033c53fd55f8425f4572717413b2ab09aa924a3f7b589d: Status 404 returned error can't find the container with id fa2233b63cbebc7a4b033c53fd55f8425f4572717413b2ab09aa924a3f7b589d Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.315162 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f"] Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.318089 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8mkmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-66brd_openstack-operators(a5129b08-723a-4f31-aeca-bfa82f192ca6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.319299 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" podUID="a5129b08-723a-4f31-aeca-bfa82f192ca6" Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.319544 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd"] Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.416127 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf"] Jan 05 22:09:09 crc kubenswrapper[5034]: W0105 22:09:09.443189 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec391381_ae2f_4f53_a3bc_42b7b47a3727.slice/crio-76c8649b5c09e80f55ff9026db605e1bf56750353c2a1bf060688d62641ae29b WatchSource:0}: Error finding container 
76c8649b5c09e80f55ff9026db605e1bf56750353c2a1bf060688d62641ae29b: Status 404 returned error can't find the container with id 76c8649b5c09e80f55ff9026db605e1bf56750353c2a1bf060688d62641ae29b Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.447356 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xgtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zsckf_openstack-operators(ec391381-ae2f-4f53-a3bc-42b7b47a3727): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.448494 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" podUID="ec391381-ae2f-4f53-a3bc-42b7b47a3727" Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.801154 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.801365 5034 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.801613 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs 
podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:11.801592603 +0000 UTC m=+1044.173592042 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "webhook-server-cert" not found Jan 05 22:09:09 crc kubenswrapper[5034]: I0105 22:09:09.801646 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.801837 5034 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 22:09:09 crc kubenswrapper[5034]: E0105 22:09:09.801940 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:11.801919393 +0000 UTC m=+1044.173918822 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "metrics-server-cert" not found Jan 05 22:09:10 crc kubenswrapper[5034]: I0105 22:09:10.228680 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" event={"ID":"ec391381-ae2f-4f53-a3bc-42b7b47a3727","Type":"ContainerStarted","Data":"76c8649b5c09e80f55ff9026db605e1bf56750353c2a1bf060688d62641ae29b"} Jan 05 22:09:10 crc kubenswrapper[5034]: E0105 22:09:10.230273 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" podUID="ec391381-ae2f-4f53-a3bc-42b7b47a3727" Jan 05 22:09:10 crc kubenswrapper[5034]: I0105 22:09:10.231424 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" event={"ID":"7f51b169-2963-4bcc-881f-ad5e0eb8ebd7","Type":"ContainerStarted","Data":"9ed10d56db1b79f03677b713b680c981eb86f80f127e533fd70382edd16d5cdd"} Jan 05 22:09:10 crc kubenswrapper[5034]: E0105 22:09:10.233132 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" podUID="7f51b169-2963-4bcc-881f-ad5e0eb8ebd7" Jan 05 22:09:10 crc kubenswrapper[5034]: I0105 22:09:10.244627 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" event={"ID":"c4c31dca-5e18-4a1f-a8ea-f10abb68d479","Type":"ContainerStarted","Data":"743d2abab98133b080146f94519d49d528cd539c191f0a26ae5d8b3635efd62e"} Jan 05 22:09:10 crc kubenswrapper[5034]: I0105 22:09:10.249474 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" event={"ID":"a5129b08-723a-4f31-aeca-bfa82f192ca6","Type":"ContainerStarted","Data":"fa2233b63cbebc7a4b033c53fd55f8425f4572717413b2ab09aa924a3f7b589d"} Jan 05 22:09:10 crc kubenswrapper[5034]: E0105 22:09:10.249652 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" podUID="c4c31dca-5e18-4a1f-a8ea-f10abb68d479" Jan 05 22:09:10 crc kubenswrapper[5034]: E0105 22:09:10.250900 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" podUID="a5129b08-723a-4f31-aeca-bfa82f192ca6" Jan 05 22:09:10 crc kubenswrapper[5034]: I0105 22:09:10.251462 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" event={"ID":"cf91ac7d-1a00-4658-9270-8a7186602088","Type":"ContainerStarted","Data":"a495a94d192c34b1a39cff120ea428a72f235cd1845d231214ab50773a9dbc60"} Jan 05 22:09:10 crc kubenswrapper[5034]: E0105 22:09:10.255732 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" podUID="cf91ac7d-1a00-4658-9270-8a7186602088" Jan 05 22:09:10 crc kubenswrapper[5034]: I0105 22:09:10.281592 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" event={"ID":"523f1764-4ebe-4424-911d-e9e8b9a06576","Type":"ContainerStarted","Data":"0eb592158210d5f1c60cefc7539341cab344e95cb459ddb531f41589433a552e"} Jan 05 22:09:10 crc kubenswrapper[5034]: I0105 22:09:10.286132 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" event={"ID":"f446cb2d-a8c4-460c-8538-3cd339280043","Type":"ContainerStarted","Data":"f216a5a5490930565f535d30338530a60dd085d0a7de718a3437e365d6831679"} Jan 05 22:09:11 crc kubenswrapper[5034]: I0105 22:09:11.023867 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.024053 5034 secret.go:188] Couldn't get 
secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.024263 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert podName:55117dc3-bdf7-4967-830e-8465bd939669 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:15.024244678 +0000 UTC m=+1047.396244117 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert") pod "infra-operator-controller-manager-6d99759cf-qjlr7" (UID: "55117dc3-bdf7-4967-830e-8465bd939669") : secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.299323 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" podUID="ec391381-ae2f-4f53-a3bc-42b7b47a3727" Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.299368 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" podUID="c4c31dca-5e18-4a1f-a8ea-f10abb68d479" Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.299596 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" podUID="7f51b169-2963-4bcc-881f-ad5e0eb8ebd7" Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.299979 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" podUID="cf91ac7d-1a00-4658-9270-8a7186602088" Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.300390 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" podUID="a5129b08-723a-4f31-aeca-bfa82f192ca6" Jan 05 22:09:11 crc kubenswrapper[5034]: I0105 22:09:11.327296 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " 
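Note the durationBeforeRetry values in the nestedpendingoperations lines: the same volumes fail at 22:09:09 with a 2s hold-off, at 22:09:11 with 4s, and (below) at 22:09:15 with 8s, i.e. the retry delay doubles per failure. A small sketch of that schedule; the initial delay and factor are read directly off this log, and the cap is an assumption for illustration rather than kubelet's exact constant:

# durationBeforeRetry doubles on each failure: 2s -> 4s -> 8s -> ...
def backoff_schedule(initial=2.0, factor=2.0, cap=120.0, attempts=6):
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, delay
        delay = min(delay * factor, cap)

for attempt, delay in backoff_schedule():
    print(f"attempt {attempt}: retry no sooner than {delay:.0f}s after failure")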
pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.327473 5034 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.327564 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert podName:70d0025c-b385-4d1d-aaae-12d916644086 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:15.327540585 +0000 UTC m=+1047.699540024 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert") pod "openstack-baremetal-operator-controller-manager-596c464d77km7w8" (UID: "70d0025c-b385-4d1d-aaae-12d916644086") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:11 crc kubenswrapper[5034]: I0105 22:09:11.834808 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:11 crc kubenswrapper[5034]: I0105 22:09:11.834932 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.835006 5034 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.835150 5034 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.835403 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:15.83505471 +0000 UTC m=+1048.207054149 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "metrics-server-cert" not found Jan 05 22:09:11 crc kubenswrapper[5034]: E0105 22:09:11.835424 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:15.83541714 +0000 UTC m=+1048.207416579 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "webhook-server-cert" not found Jan 05 22:09:15 crc kubenswrapper[5034]: I0105 22:09:15.084555 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:15 crc kubenswrapper[5034]: E0105 22:09:15.085096 5034 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:15 crc kubenswrapper[5034]: E0105 22:09:15.085159 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert podName:55117dc3-bdf7-4967-830e-8465bd939669 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:23.08513682 +0000 UTC m=+1055.457136259 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert") pod "infra-operator-controller-manager-6d99759cf-qjlr7" (UID: "55117dc3-bdf7-4967-830e-8465bd939669") : secret "infra-operator-webhook-server-cert" not found Jan 05 22:09:15 crc kubenswrapper[5034]: I0105 22:09:15.389581 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:15 crc kubenswrapper[5034]: E0105 22:09:15.389695 5034 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:15 crc kubenswrapper[5034]: E0105 22:09:15.389766 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert podName:70d0025c-b385-4d1d-aaae-12d916644086 nodeName:}" failed. No retries permitted until 2026-01-05 22:09:23.389746634 +0000 UTC m=+1055.761746073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert") pod "openstack-baremetal-operator-controller-manager-596c464d77km7w8" (UID: "70d0025c-b385-4d1d-aaae-12d916644086") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 22:09:15 crc kubenswrapper[5034]: I0105 22:09:15.897037 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:15 crc kubenswrapper[5034]: I0105 22:09:15.897190 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:15 crc kubenswrapper[5034]: E0105 22:09:15.897349 5034 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 22:09:15 crc kubenswrapper[5034]: E0105 22:09:15.897390 5034 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 22:09:15 crc kubenswrapper[5034]: E0105 22:09:15.897443 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:23.897419594 +0000 UTC m=+1056.269419033 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "metrics-server-cert" not found Jan 05 22:09:15 crc kubenswrapper[5034]: E0105 22:09:15.897465 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs podName:23276bd0-4dde-4a42-8c97-481788b2c35c nodeName:}" failed. No retries permitted until 2026-01-05 22:09:23.897457255 +0000 UTC m=+1056.269456774 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs") pod "openstack-operator-controller-manager-555f86cbf8-jsp5s" (UID: "23276bd0-4dde-4a42-8c97-481788b2c35c") : secret "webhook-server-cert" not found Jan 05 22:09:20 crc kubenswrapper[5034]: E0105 22:09:20.589263 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c" Jan 05 22:09:20 crc kubenswrapper[5034]: E0105 22:09:20.590023 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jlkrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-4lxms_openstack-operators(7ab1c07d-8d12-4d71-b191-3334da2b04dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:09:20 crc kubenswrapper[5034]: E0105 22:09:20.591314 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" 
podUID="7ab1c07d-8d12-4d71-b191-3334da2b04dd" Jan 05 22:09:21 crc kubenswrapper[5034]: E0105 22:09:21.361831 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" podUID="7ab1c07d-8d12-4d71-b191-3334da2b04dd" Jan 05 22:09:22 crc kubenswrapper[5034]: E0105 22:09:22.083194 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c" Jan 05 22:09:22 crc kubenswrapper[5034]: E0105 22:09:22.083371 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5z5rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-568985c78-dqv5l_openstack-operators(40b7d083-f9c2-4114-9fea-7b205a0f2699): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:09:22 crc kubenswrapper[5034]: E0105 22:09:22.084574 5034 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" podUID="40b7d083-f9c2-4114-9fea-7b205a0f2699" Jan 05 22:09:22 crc kubenswrapper[5034]: E0105 22:09:22.368126 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" podUID="40b7d083-f9c2-4114-9fea-7b205a0f2699" Jan 05 22:09:22 crc kubenswrapper[5034]: E0105 22:09:22.669814 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 05 22:09:22 crc kubenswrapper[5034]: E0105 22:09:22.670014 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9gxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-brmhw_openstack-operators(2e72dc34-e146-4759-92a9-472b505e452e): 
Jan 05 22:09:22 crc kubenswrapper[5034]: E0105 22:09:22.670014 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9gxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-brmhw_openstack-operators(2e72dc34-e146-4759-92a9-472b505e452e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 05 22:09:22 crc kubenswrapper[5034]: E0105 22:09:22.671453 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" podUID="2e72dc34-e146-4759-92a9-472b505e452e"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.110814 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.117145 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55117dc3-bdf7-4967-830e-8465bd939669-cert\") pod \"infra-operator-controller-manager-6d99759cf-qjlr7\" (UID: \"55117dc3-bdf7-4967-830e-8465bd939669\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.187730 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7"
Jan 05 22:09:23 crc kubenswrapper[5034]: E0105 22:09:23.375746 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" podUID="2e72dc34-e146-4759-92a9-472b505e452e"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.415022 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.419970 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70d0025c-b385-4d1d-aaae-12d916644086-cert\") pod \"openstack-baremetal-operator-controller-manager-596c464d77km7w8\" (UID: \"70d0025c-b385-4d1d-aaae-12d916644086\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8"
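At 22:09:23 the missing secrets finally exist and the very same volumes flip from nestedpendingoperations failures to "MountVolume.SetUp succeeded", after which sandbox creation starts. A hedged sketch that reconstructs this per-volume timeline (failed attempts with their backoff, then the success) from a dump like this one; the regexes follow the line shapes above and the log path is illustrative:

import re

fail = re.compile(
    r'E(\d{4} [\d:.]+).*durationBeforeRetry (\d+s).*UniqueName: \\?"([^"\\]+)\\?"')
ok = re.compile(
    r'I(\d{4} [\d:.]+).*MountVolume\.SetUp succeeded for volume.*UniqueName: \\?"([^"\\]+)\\?"')

timeline = {}
with open("kubelet.log") as fh:            # hypothetical path
    for line in fh:
        if m := fail.search(line):
            timeline.setdefault(m.group(3), []).append(
                f"{m.group(1)} fail (retry {m.group(2)})")
        elif m := ok.search(line):
            timeline.setdefault(m.group(2), []).append(f"{m.group(1)} success")

for volume, events in timeline.items():
    print(volume.rsplit("/", 1)[-1])
    for ev in events:
        print("  ", ev)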
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.524823 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.593305 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7"]
Jan 05 22:09:23 crc kubenswrapper[5034]: W0105 22:09:23.599841 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55117dc3_bdf7_4967_830e_8465bd939669.slice/crio-21ab0c0a865a052190a9e645e732eaaa1ea09262d3cd1fed417c57e71c7a8e3d WatchSource:0}: Error finding container 21ab0c0a865a052190a9e645e732eaaa1ea09262d3cd1fed417c57e71c7a8e3d: Status 404 returned error can't find the container with id 21ab0c0a865a052190a9e645e732eaaa1ea09262d3cd1fed417c57e71c7a8e3d
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.923277 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.923648 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.931950 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-webhook-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.931949 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23276bd0-4dde-4a42-8c97-481788b2c35c-metrics-certs\") pod \"openstack-operator-controller-manager-555f86cbf8-jsp5s\" (UID: \"23276bd0-4dde-4a42-8c97-481788b2c35c\") " pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s"
Jan 05 22:09:23 crc kubenswrapper[5034]: I0105 22:09:23.968306 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8"]
Jan 05 22:09:23 crc kubenswrapper[5034]: W0105 22:09:23.971400 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70d0025c_b385_4d1d_aaae_12d916644086.slice/crio-a5c338dc574ac2b238250b57dd02e3d3107c3fa0dc33231f405e404f943e3e90 WatchSource:0}: Error finding container a5c338dc574ac2b238250b57dd02e3d3107c3fa0dc33231f405e404f943e3e90: Status 404 returned error can't find the container with id a5c338dc574ac2b238250b57dd02e3d3107c3fa0dc33231f405e404f943e3e90
Jan 05 22:09:24 crc kubenswrapper[5034]: I0105 22:09:24.019438 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s"
Jan 05 22:09:24 crc kubenswrapper[5034]: I0105 22:09:24.250000 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s"]
Jan 05 22:09:24 crc kubenswrapper[5034]: W0105 22:09:24.282438 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23276bd0_4dde_4a42_8c97_481788b2c35c.slice/crio-01efe764e0b056647177f02d2ed6a9c6c2c6ff9ea09601c2a846c1a9855446db WatchSource:0}: Error finding container 01efe764e0b056647177f02d2ed6a9c6c2c6ff9ea09601c2a846c1a9855446db: Status 404 returned error can't find the container with id 01efe764e0b056647177f02d2ed6a9c6c2c6ff9ea09601c2a846c1a9855446db
Jan 05 22:09:24 crc kubenswrapper[5034]: I0105 22:09:24.382359 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" event={"ID":"55117dc3-bdf7-4967-830e-8465bd939669","Type":"ContainerStarted","Data":"21ab0c0a865a052190a9e645e732eaaa1ea09262d3cd1fed417c57e71c7a8e3d"}
Jan 05 22:09:24 crc kubenswrapper[5034]: I0105 22:09:24.383413 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" event={"ID":"23276bd0-4dde-4a42-8c97-481788b2c35c","Type":"ContainerStarted","Data":"01efe764e0b056647177f02d2ed6a9c6c2c6ff9ea09601c2a846c1a9855446db"}
Jan 05 22:09:24 crc kubenswrapper[5034]: I0105 22:09:24.384687 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" event={"ID":"70d0025c-b385-4d1d-aaae-12d916644086","Type":"ContainerStarted","Data":"a5c338dc574ac2b238250b57dd02e3d3107c3fa0dc33231f405e404f943e3e90"}
Jan 05 22:09:27 crc kubenswrapper[5034]: I0105 22:09:27.403150 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" event={"ID":"c0ad1066-4da0-43bb-8599-3bd8a5e445f4","Type":"ContainerStarted","Data":"64e45360ae9451474750a1217a7664f69c0b3fedb2d4f790351dc8f77239e269"}
Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.415798 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" event={"ID":"f446cb2d-a8c4-460c-8538-3cd339280043","Type":"ContainerStarted","Data":"0ecf909e9acff51cf7903c2e36a2ce5dd5f9cf008a23c517d9767e0a84ffbcfb"}
Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.424603 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" event={"ID":"23276bd0-4dde-4a42-8c97-481788b2c35c","Type":"ContainerStarted","Data":"3932152e71994291700be655e6319f235f6c3b75c10b431081367cab8336f961"}
Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.430927 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" event={"ID":"1649d2ab-0b0e-475a-be2b-485845105d31","Type":"ContainerStarted","Data":"9f5c99b8d0cfe985c65815b44d37438e6781d5880ed61726fb4db1f5c31cde05"}
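The "Failed to process watch event ... Status 404" warnings above appear to be a benign startup race: the cgroup watch fires before the runtime has finished registering the new container, and the very same container IDs show up moments later in ContainerStarted PLEG events. A hedged pairing script to confirm that from a dump like this one (log path illustrative):

import re

watch_404 = re.compile(r'Error finding container ([0-9a-f]{64}):')
started = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

warned, confirmed = set(), set()
with open("kubelet.log") as fh:            # hypothetical path
    for line in fh:
        if m := watch_404.search(line):
            warned.add(m.group(1))
        elif m := started.search(line):
            confirmed.add(m.group(1))

for cid in sorted(warned):
    status = "started later" if cid in confirmed else "never started"
    print(cid[:12], status)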
event={"ID":"6fd87fd1-9317-4df1-be11-c509d9643f84","Type":"ContainerStarted","Data":"2ab8d3dc6584537eafe6e9ff457550546ea16baa9aa2092aa26092ff79509a8a"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.432991 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" event={"ID":"862bb25b-65e6-4866-a881-99ff200bd44c","Type":"ContainerStarted","Data":"bd31ca576579bd2f37f36092cb3fe9b8db72f7150ebfe76d5119cd0244d7b69d"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.433866 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" event={"ID":"eaf2fcd8-230f-414c-88dc-68ccb91b009e","Type":"ContainerStarted","Data":"6c9e1104e7c4d6d36f6fb9b63248caece2a2047c51ad5bc1bb5290d529f64d3c"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.437311 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" event={"ID":"896815d6-faea-4d19-ac51-a51653fcb729","Type":"ContainerStarted","Data":"e582b7d99febd85ad49f43e846202757ec80c938ce40bd260530a34841fdf458"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.437592 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.441296 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" event={"ID":"51008687-6437-41e7-be67-d0b8504af846","Type":"ContainerStarted","Data":"11abc1c507df1d6863685f36857951d72991198aa26aa57e2518684e827f4979"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.442759 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" event={"ID":"523f1764-4ebe-4424-911d-e9e8b9a06576","Type":"ContainerStarted","Data":"a8f10e8b96bb0d5c6e5ec99c4330fa02f362987d4c608fad0865083b007eee49"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.442910 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.451022 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" event={"ID":"496eda61-616b-4c26-8a21-f7c32d44b301","Type":"ContainerStarted","Data":"2d88b498e6d84810a1822e2bc314b5b5f0b201c26d883636ab9ab72c49fa4913"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.451111 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.455159 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" podStartSLOduration=8.574861537 podStartE2EDuration="22.455138847s" podCreationTimestamp="2026-01-05 22:09:06 +0000 UTC" firstStartedPulling="2026-01-05 22:09:08.76325189 +0000 UTC m=+1041.135251329" lastFinishedPulling="2026-01-05 22:09:22.6435292 +0000 UTC m=+1055.015528639" observedRunningTime="2026-01-05 22:09:28.452829501 +0000 UTC m=+1060.824828940" watchObservedRunningTime="2026-01-05 22:09:28.455138847 +0000 UTC m=+1060.827138286" Jan 05 22:09:28 crc 
kubenswrapper[5034]: I0105 22:09:28.456011 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" event={"ID":"8cb3a336-40ad-44f2-8817-09d0d9807a1a","Type":"ContainerStarted","Data":"a6430ba109643c3f88beb27eceb5c648e8bbb455416166cb33cc45fbc66de1a8"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.456122 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.460834 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" event={"ID":"3a788872-b35e-4386-97a0-55b225e77f3c","Type":"ContainerStarted","Data":"3abda0beb9fc59702a4fef4c1b52508455235ae89fe946d825f56b275b915771"} Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.460895 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.474154 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" podStartSLOduration=8.12602723 podStartE2EDuration="22.474135397s" podCreationTimestamp="2026-01-05 22:09:06 +0000 UTC" firstStartedPulling="2026-01-05 22:09:08.299090498 +0000 UTC m=+1040.671089937" lastFinishedPulling="2026-01-05 22:09:22.647198665 +0000 UTC m=+1055.019198104" observedRunningTime="2026-01-05 22:09:28.473383596 +0000 UTC m=+1060.845383035" watchObservedRunningTime="2026-01-05 22:09:28.474135397 +0000 UTC m=+1060.846134836" Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.504653 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" podStartSLOduration=8.137978384 podStartE2EDuration="21.504634715s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.280495322 +0000 UTC m=+1041.652494761" lastFinishedPulling="2026-01-05 22:09:22.647151653 +0000 UTC m=+1055.019151092" observedRunningTime="2026-01-05 22:09:28.498447879 +0000 UTC m=+1060.870447318" watchObservedRunningTime="2026-01-05 22:09:28.504634715 +0000 UTC m=+1060.876634154" Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.515984 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" podStartSLOduration=7.996981123 podStartE2EDuration="21.515967087s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.128239251 +0000 UTC m=+1041.500238690" lastFinishedPulling="2026-01-05 22:09:22.647225215 +0000 UTC m=+1055.019224654" observedRunningTime="2026-01-05 22:09:28.514488345 +0000 UTC m=+1060.886487784" watchObservedRunningTime="2026-01-05 22:09:28.515967087 +0000 UTC m=+1060.887966526" Jan 05 22:09:28 crc kubenswrapper[5034]: I0105 22:09:28.545786 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" podStartSLOduration=7.800297999 podStartE2EDuration="21.545769035s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:08.898397384 +0000 UTC m=+1041.270396823" lastFinishedPulling="2026-01-05 22:09:22.64386842 +0000 UTC 
m=+1055.015867859" observedRunningTime="2026-01-05 22:09:28.543604143 +0000 UTC m=+1060.915603582" watchObservedRunningTime="2026-01-05 22:09:28.545769035 +0000 UTC m=+1060.917768474" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.467086 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.496664 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" podStartSLOduration=22.49664417 podStartE2EDuration="22.49664417s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:09:29.49345598 +0000 UTC m=+1061.865455439" watchObservedRunningTime="2026-01-05 22:09:29.49664417 +0000 UTC m=+1061.868643609" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.520734 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" podStartSLOduration=9.263567526 podStartE2EDuration="23.520713925s" podCreationTimestamp="2026-01-05 22:09:06 +0000 UTC" firstStartedPulling="2026-01-05 22:09:08.406277677 +0000 UTC m=+1040.778277126" lastFinishedPulling="2026-01-05 22:09:22.663424086 +0000 UTC m=+1055.035423525" observedRunningTime="2026-01-05 22:09:29.515439145 +0000 UTC m=+1061.887438584" watchObservedRunningTime="2026-01-05 22:09:29.520713925 +0000 UTC m=+1061.892713364" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.541846 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" podStartSLOduration=9.045168277 podStartE2EDuration="22.541826176s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.150548306 +0000 UTC m=+1041.522547745" lastFinishedPulling="2026-01-05 22:09:22.647206205 +0000 UTC m=+1055.019205644" observedRunningTime="2026-01-05 22:09:29.533981432 +0000 UTC m=+1061.905980881" watchObservedRunningTime="2026-01-05 22:09:29.541826176 +0000 UTC m=+1061.913825635" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.559813 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" podStartSLOduration=9.217802017 podStartE2EDuration="22.559792587s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.301838709 +0000 UTC m=+1041.673838148" lastFinishedPulling="2026-01-05 22:09:22.643829279 +0000 UTC m=+1055.015828718" observedRunningTime="2026-01-05 22:09:29.551798339 +0000 UTC m=+1061.923797788" watchObservedRunningTime="2026-01-05 22:09:29.559792587 +0000 UTC m=+1061.931792026" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.574860 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" podStartSLOduration=9.021936086 podStartE2EDuration="22.574840285s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.090049455 +0000 UTC m=+1041.462048894" lastFinishedPulling="2026-01-05 22:09:22.642953654 +0000 UTC m=+1055.014953093" observedRunningTime="2026-01-05 22:09:29.573807005 +0000 UTC m=+1061.945806444" 
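The pod_startup_latency_tracker lines encode a useful relationship: podStartSLOduration is approximately podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling), i.e. the SLO figure excludes time spent pulling. A worked check against the cinder-operator entry above; timestamps are truncated to microseconds because strptime's %f takes at most six digits, and the residual difference of under a microsecond comes from the log mixing wall-clock and monotonic readings:

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
first_pull = datetime.strptime("2026-01-05 22:09:08.763251", fmt)  # firstStartedPulling
last_pull = datetime.strptime("2026-01-05 22:09:22.643529", fmt)   # lastFinishedPulling
e2e = 22.455138847                                                 # podStartE2EDuration (s)
slo = 8.574861537                                                  # podStartSLOduration (s)

pull_window = (last_pull - first_pull).total_seconds()
print(f"pull window : {pull_window:.6f}s")
print(f"e2e - pull  : {e2e - pull_window:.6f}s (reported SLO {slo:.6f}s)")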
watchObservedRunningTime="2026-01-05 22:09:29.574840285 +0000 UTC m=+1061.946839724" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.617992 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" podStartSLOduration=8.64200165 podStartE2EDuration="22.617975551s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:08.667704223 +0000 UTC m=+1041.039703662" lastFinishedPulling="2026-01-05 22:09:22.643678124 +0000 UTC m=+1055.015677563" observedRunningTime="2026-01-05 22:09:29.588102742 +0000 UTC m=+1061.960102201" watchObservedRunningTime="2026-01-05 22:09:29.617975551 +0000 UTC m=+1061.989974990" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.672022 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" podStartSLOduration=8.500818353 podStartE2EDuration="22.672000748s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:08.476052301 +0000 UTC m=+1040.848051740" lastFinishedPulling="2026-01-05 22:09:22.647234696 +0000 UTC m=+1055.019234135" observedRunningTime="2026-01-05 22:09:29.633935455 +0000 UTC m=+1062.005934894" watchObservedRunningTime="2026-01-05 22:09:29.672000748 +0000 UTC m=+1062.044000197" Jan 05 22:09:29 crc kubenswrapper[5034]: I0105 22:09:29.688020 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" podStartSLOduration=9.168472794 podStartE2EDuration="22.687997043s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.128253062 +0000 UTC m=+1041.500252501" lastFinishedPulling="2026-01-05 22:09:22.647777311 +0000 UTC m=+1055.019776750" observedRunningTime="2026-01-05 22:09:29.669888148 +0000 UTC m=+1062.041887587" watchObservedRunningTime="2026-01-05 22:09:29.687997043 +0000 UTC m=+1062.059996482" Jan 05 22:09:31 crc kubenswrapper[5034]: I0105 22:09:31.482938 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" event={"ID":"7f51b169-2963-4bcc-881f-ad5e0eb8ebd7","Type":"ContainerStarted","Data":"233a705c8432b497fa93f4888d339f81fbe34ca7e920fab3cfcfa87df812bdd9"} Jan 05 22:09:31 crc kubenswrapper[5034]: I0105 22:09:31.484204 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" Jan 05 22:09:31 crc kubenswrapper[5034]: I0105 22:09:31.513716 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" podStartSLOduration=3.398341437 podStartE2EDuration="24.513698281s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.305906995 +0000 UTC m=+1041.677906434" lastFinishedPulling="2026-01-05 22:09:30.421263839 +0000 UTC m=+1062.793263278" observedRunningTime="2026-01-05 22:09:31.508551195 +0000 UTC m=+1063.880550624" watchObservedRunningTime="2026-01-05 22:09:31.513698281 +0000 UTC m=+1063.885697720" Jan 05 22:09:34 crc kubenswrapper[5034]: I0105 22:09:34.019958 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:34 crc kubenswrapper[5034]: I0105 22:09:34.027864 
5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-555f86cbf8-jsp5s" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.311717 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.314733 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-svkbp" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.334814 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-kgxlf" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.340832 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jvpk6" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.451252 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.456708 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-z872k" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.490309 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.493833 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xjmnm" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.523638 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.533781 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-cg7pq" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.540902 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-k2nrc" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.552785 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" event={"ID":"70d0025c-b385-4d1d-aaae-12d916644086","Type":"ContainerStarted","Data":"c02e9c3b774bb47402c0304e6df2ed4c807e264e27379d1e2981baee0660d680"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.555391 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" event={"ID":"2e72dc34-e146-4759-92a9-472b505e452e","Type":"ContainerStarted","Data":"a4a03278f9ce60625b349c535a5804d4a362756de3b472b620f2e1f72fa5e3c1"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.556256 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.560691 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.575372 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" event={"ID":"55117dc3-bdf7-4967-830e-8465bd939669","Type":"ContainerStarted","Data":"f526607203eeefabd4fa61417945b1b086d25025a22b6dc251e48236c32b32b1"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.579148 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.602486 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" event={"ID":"40b7d083-f9c2-4114-9fea-7b205a0f2699","Type":"ContainerStarted","Data":"96bfbb0550dbf0ed10b7bddcf53c1816860e310fbbc8a09e78349c9c2ef521e1"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.603479 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.633031 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-w9zgw" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.638481 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" event={"ID":"cf91ac7d-1a00-4658-9270-8a7186602088","Type":"ContainerStarted","Data":"375f331234fd4100f7c3497c27db45b96092c343a8e896cceb7f2320f0bfd01c"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.640403 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" event={"ID":"a5129b08-723a-4f31-aeca-bfa82f192ca6","Type":"ContainerStarted","Data":"e31536eb4f7540f5d53125a6202d96fddbf3e36bf23475e939e95b8569ff4e63"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.640910 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.642701 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" event={"ID":"7ab1c07d-8d12-4d71-b191-3334da2b04dd","Type":"ContainerStarted","Data":"abf886d7521fa94ba54b457c8ce8b158c1b52246982cb15710c1983577d32795"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.643373 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.644922 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" event={"ID":"ec391381-ae2f-4f53-a3bc-42b7b47a3727","Type":"ContainerStarted","Data":"43a21a08df8da2b2015b311fb9726d53ce4a223bc139b290838364bc24ebec3a"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.647471 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" 
event={"ID":"c4c31dca-5e18-4a1f-a8ea-f10abb68d479","Type":"ContainerStarted","Data":"ecb6b807db7585007bf99f75615da57327a4fbd78ef98190c7bb7977ec0f9af7"} Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.662515 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" podStartSLOduration=3.2930758239999998 podStartE2EDuration="30.662498749s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.105566237 +0000 UTC m=+1041.477565666" lastFinishedPulling="2026-01-05 22:09:36.474989152 +0000 UTC m=+1068.846988591" observedRunningTime="2026-01-05 22:09:37.662418126 +0000 UTC m=+1070.034417565" watchObservedRunningTime="2026-01-05 22:09:37.662498749 +0000 UTC m=+1070.034498188" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.717762 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" podStartSLOduration=18.216791311 podStartE2EDuration="30.71773977s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:23.973498088 +0000 UTC m=+1056.345497527" lastFinishedPulling="2026-01-05 22:09:36.474446557 +0000 UTC m=+1068.846445986" observedRunningTime="2026-01-05 22:09:37.714264041 +0000 UTC m=+1070.086263480" watchObservedRunningTime="2026-01-05 22:09:37.71773977 +0000 UTC m=+1070.089739209" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.746157 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" podStartSLOduration=3.390980547 podStartE2EDuration="30.746139857s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.122655573 +0000 UTC m=+1041.494655012" lastFinishedPulling="2026-01-05 22:09:36.477814883 +0000 UTC m=+1068.849814322" observedRunningTime="2026-01-05 22:09:37.737156352 +0000 UTC m=+1070.109155791" watchObservedRunningTime="2026-01-05 22:09:37.746139857 +0000 UTC m=+1070.118139286" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.762456 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zsckf" podStartSLOduration=3.74515107 podStartE2EDuration="30.762440031s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.447194813 +0000 UTC m=+1041.819194252" lastFinishedPulling="2026-01-05 22:09:36.464483774 +0000 UTC m=+1068.836483213" observedRunningTime="2026-01-05 22:09:37.759742694 +0000 UTC m=+1070.131742133" watchObservedRunningTime="2026-01-05 22:09:37.762440031 +0000 UTC m=+1070.134439470" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.775232 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" podStartSLOduration=3.618112427 podStartE2EDuration="30.775216424s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.317915166 +0000 UTC m=+1041.689914605" lastFinishedPulling="2026-01-05 22:09:36.475019163 +0000 UTC m=+1068.847018602" observedRunningTime="2026-01-05 22:09:37.773393103 +0000 UTC m=+1070.145392552" watchObservedRunningTime="2026-01-05 22:09:37.775216424 +0000 UTC m=+1070.147215863" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.804366 5034 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" podStartSLOduration=3.195534449 podStartE2EDuration="30.804339123s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:08.868744871 +0000 UTC m=+1041.240744310" lastFinishedPulling="2026-01-05 22:09:36.477549545 +0000 UTC m=+1068.849548984" observedRunningTime="2026-01-05 22:09:37.80141302 +0000 UTC m=+1070.173412469" watchObservedRunningTime="2026-01-05 22:09:37.804339123 +0000 UTC m=+1070.176338582" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.806234 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xj8gg" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.851810 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" podStartSLOduration=3.6795090740000003 podStartE2EDuration="30.851793273s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.305511023 +0000 UTC m=+1041.677510462" lastFinishedPulling="2026-01-05 22:09:36.477795222 +0000 UTC m=+1068.849794661" observedRunningTime="2026-01-05 22:09:37.826586636 +0000 UTC m=+1070.198586075" watchObservedRunningTime="2026-01-05 22:09:37.851793273 +0000 UTC m=+1070.223792712" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.865012 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" podStartSLOduration=3.693529662 podStartE2EDuration="30.864992708s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:09.303040843 +0000 UTC m=+1041.675040282" lastFinishedPulling="2026-01-05 22:09:36.474503889 +0000 UTC m=+1068.846503328" observedRunningTime="2026-01-05 22:09:37.858765491 +0000 UTC m=+1070.230764930" watchObservedRunningTime="2026-01-05 22:09:37.864992708 +0000 UTC m=+1070.236992147" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.872996 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" podStartSLOduration=17.998926374 podStartE2EDuration="30.872978695s" podCreationTimestamp="2026-01-05 22:09:07 +0000 UTC" firstStartedPulling="2026-01-05 22:09:23.603264498 +0000 UTC m=+1055.975263937" lastFinishedPulling="2026-01-05 22:09:36.477316819 +0000 UTC m=+1068.849316258" observedRunningTime="2026-01-05 22:09:37.871563785 +0000 UTC m=+1070.243563224" watchObservedRunningTime="2026-01-05 22:09:37.872978695 +0000 UTC m=+1070.244978134" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.884551 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-trszk" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.892451 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.906344 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-qc578" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.936584 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
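
[Annotation] The pod_startup_latency_tracker entries above encode a simple relationship: podStartSLOduration appears to equal podStartE2EDuration minus the image-pull window, with the pull window taken from the monotonic (m=+...) offsets rather than the wall-clock timestamps. A minimal Go sketch, assuming only the numbers logged for the nova-operator pod above, reproduces the logged value:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (the m=+... values) and E2E duration copied from
        // the nova-operator-controller-manager entry above.
        const (
            firstStartedPulling = 1041.477565666 // m=+ offset, in seconds
            lastFinishedPulling = 1068.846988591
            podStartE2E         = 30.662498749 // watchObservedRunningTime - podCreationTimestamp
        )
        pullWindow := lastFinishedPulling - firstStartedPulling
        fmt.Printf("pull window:  %.9fs\n", pullWindow)             // 27.369422925s
        fmt.Printf("SLO duration: %.9fs\n", podStartE2E-pullWindow) // ~3.293075824s, as logged
    }

1068.846988591 - 1041.477565666 = 27.369422925s of pulling, and 30.662498749 - 27.369422925 = 3.293075824s, matching podStartSLOduration=3.2930758239999998 up to float64 rounding; the same arithmetic fits the other tracker entries above.
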
status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" Jan 05 22:09:37 crc kubenswrapper[5034]: I0105 22:09:37.939808 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-gqgz4" Jan 05 22:09:38 crc kubenswrapper[5034]: I0105 22:09:38.157155 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-b9x9f" Jan 05 22:09:38 crc kubenswrapper[5034]: I0105 22:09:38.232445 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" Jan 05 22:09:38 crc kubenswrapper[5034]: I0105 22:09:38.364686 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" Jan 05 22:09:43 crc kubenswrapper[5034]: I0105 22:09:43.193804 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qjlr7" Jan 05 22:09:43 crc kubenswrapper[5034]: I0105 22:09:43.530518 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-596c464d77km7w8" Jan 05 22:09:47 crc kubenswrapper[5034]: I0105 22:09:47.712039 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-dqv5l" Jan 05 22:09:47 crc kubenswrapper[5034]: I0105 22:09:47.781316 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-4lxms" Jan 05 22:09:47 crc kubenswrapper[5034]: I0105 22:09:47.850286 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-brmhw" Jan 05 22:09:48 crc kubenswrapper[5034]: I0105 22:09:48.004102 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-66brd" Jan 05 22:09:48 crc kubenswrapper[5034]: I0105 22:09:48.235000 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-zbzfs" Jan 05 22:09:48 crc kubenswrapper[5034]: I0105 22:09:48.368175 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-49jv5" Jan 05 22:09:50 crc kubenswrapper[5034]: I0105 22:09:50.468421 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:09:50 crc kubenswrapper[5034]: I0105 22:09:50.468759 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.380800 5034 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-84bb9d8bd9-cw7vg"] Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.390819 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.394810 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.395018 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fb7zm" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.395291 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.395891 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-cw7vg"] Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.396843 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.424116 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcw95\" (UniqueName: \"kubernetes.io/projected/379a92a9-c928-439f-9516-caa432d42fcc-kube-api-access-mcw95\") pod \"dnsmasq-dns-84bb9d8bd9-cw7vg\" (UID: \"379a92a9-c928-439f-9516-caa432d42fcc\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.424176 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379a92a9-c928-439f-9516-caa432d42fcc-config\") pod \"dnsmasq-dns-84bb9d8bd9-cw7vg\" (UID: \"379a92a9-c928-439f-9516-caa432d42fcc\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.467518 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-mm8cp"] Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.468955 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.471460 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.502755 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-mm8cp"] Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.526435 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcw95\" (UniqueName: \"kubernetes.io/projected/379a92a9-c928-439f-9516-caa432d42fcc-kube-api-access-mcw95\") pod \"dnsmasq-dns-84bb9d8bd9-cw7vg\" (UID: \"379a92a9-c928-439f-9516-caa432d42fcc\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.526509 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379a92a9-c928-439f-9516-caa432d42fcc-config\") pod \"dnsmasq-dns-84bb9d8bd9-cw7vg\" (UID: \"379a92a9-c928-439f-9516-caa432d42fcc\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.527592 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379a92a9-c928-439f-9516-caa432d42fcc-config\") pod \"dnsmasq-dns-84bb9d8bd9-cw7vg\" (UID: \"379a92a9-c928-439f-9516-caa432d42fcc\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.571487 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcw95\" (UniqueName: \"kubernetes.io/projected/379a92a9-c928-439f-9516-caa432d42fcc-kube-api-access-mcw95\") pod \"dnsmasq-dns-84bb9d8bd9-cw7vg\" (UID: \"379a92a9-c928-439f-9516-caa432d42fcc\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.627682 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9crc\" (UniqueName: \"kubernetes.io/projected/1c9673a7-0213-4e3a-abf5-b2c90968560a-kube-api-access-k9crc\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.627954 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-config\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.628048 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-dns-svc\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.715783 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.729801 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9crc\" (UniqueName: \"kubernetes.io/projected/1c9673a7-0213-4e3a-abf5-b2c90968560a-kube-api-access-k9crc\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.729865 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-config\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.729906 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-dns-svc\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.730736 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-dns-svc\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.731730 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-config\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.757712 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9crc\" (UniqueName: \"kubernetes.io/projected/1c9673a7-0213-4e3a-abf5-b2c90968560a-kube-api-access-k9crc\") pod \"dnsmasq-dns-5f854695bc-mm8cp\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:08 crc kubenswrapper[5034]: I0105 22:10:08.796544 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.118233 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-mm8cp"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.144448 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-mrzlh"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.146067 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.155820 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-mrzlh"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.189623 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-cw7vg"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.216219 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.341696 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mrs\" (UniqueName: \"kubernetes.io/projected/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-kube-api-access-p7mrs\") pod \"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.341740 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-config\") pod \"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.341766 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.443347 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mrs\" (UniqueName: \"kubernetes.io/projected/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-kube-api-access-p7mrs\") pod \"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.443405 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-config\") pod \"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.443442 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.444937 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.444944 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-config\") pod 
\"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.466783 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mrs\" (UniqueName: \"kubernetes.io/projected/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-kube-api-access-p7mrs\") pod \"dnsmasq-dns-744ffd65bc-mrzlh\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") " pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.477296 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-mm8cp"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.477518 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.746325 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-mrzlh"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.820091 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-cw7vg"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.859034 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-dbhcb"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.860490 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.865891 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-dbhcb"] Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.867137 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" event={"ID":"1c9673a7-0213-4e3a-abf5-b2c90968560a","Type":"ContainerStarted","Data":"921381cf05f774cd7f623567a8ad2c0e511cfc2288a6aba062829d9c8a404b3d"} Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.876261 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" event={"ID":"379a92a9-c928-439f-9516-caa432d42fcc","Type":"ContainerStarted","Data":"52155d5abc0a0ecbe8ae06279c2543a35891d70e7704c1b9e5248b38a390fc8c"} Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.880636 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" event={"ID":"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58","Type":"ContainerStarted","Data":"beb37deed6fe67b8af2e16eb93810bd5c7ca6228d9a526378fc69814f54828f2"} Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.950627 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pqzb\" (UniqueName: \"kubernetes.io/projected/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-kube-api-access-6pqzb\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.950765 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-dns-svc\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:09 crc kubenswrapper[5034]: I0105 22:10:09.950823 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-config\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.051806 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pqzb\" (UniqueName: \"kubernetes.io/projected/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-kube-api-access-6pqzb\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.051896 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-dns-svc\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.051922 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-config\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.052731 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-config\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.052799 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-dns-svc\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.073260 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pqzb\" (UniqueName: \"kubernetes.io/projected/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-kube-api-access-6pqzb\") pod \"dnsmasq-dns-95f5f6995-dbhcb\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") " pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.185273 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.305635 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.306995 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.310734 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.310848 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.310956 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.311021 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.311118 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.311170 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6thb4" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.312693 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.323066 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456711 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456785 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456830 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456856 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456891 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjjlb\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-kube-api-access-kjjlb\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456921 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456950 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456978 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.456996 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.457041 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94526d3f-1e21-4eef-abb7-5cd05bfb1670-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.457127 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94526d3f-1e21-4eef-abb7-5cd05bfb1670-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559044 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjjlb\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-kube-api-access-kjjlb\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559131 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559176 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559204 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 
22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559237 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559274 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94526d3f-1e21-4eef-abb7-5cd05bfb1670-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559293 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94526d3f-1e21-4eef-abb7-5cd05bfb1670-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559367 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559423 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559451 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559492 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559618 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.559994 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.560172 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.560516 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.560584 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.562275 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.563512 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94526d3f-1e21-4eef-abb7-5cd05bfb1670-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.563844 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.566896 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.574278 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94526d3f-1e21-4eef-abb7-5cd05bfb1670-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.586104 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.600091 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjjlb\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-kube-api-access-kjjlb\") pod \"rabbitmq-server-0\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " pod="openstack/rabbitmq-server-0" Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.633905 5034 util.go:30] "No sandbox for pod can be found. 
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.695174 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-dbhcb"]
Jan 05 22:10:10 crc kubenswrapper[5034]: W0105 22:10:10.698375 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fad3d7d_2f02_4dcc_9cbe_72a13438bcda.slice/crio-4005c7611f2b8fbf1a49827dd0bb3661933dd9c54c96eb5c573f674693de1135 WatchSource:0}: Error finding container 4005c7611f2b8fbf1a49827dd0bb3661933dd9c54c96eb5c573f674693de1135: Status 404 returned error can't find the container with id 4005c7611f2b8fbf1a49827dd0bb3661933dd9c54c96eb5c573f674693de1135
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.888435 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" event={"ID":"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda","Type":"ContainerStarted","Data":"4005c7611f2b8fbf1a49827dd0bb3661933dd9c54c96eb5c573f674693de1135"}
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.971527 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.973119 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.977364 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.977671 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.978378 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5jqzr"
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.978644 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.978805 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.978954 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.979194 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 05 22:10:10 crc kubenswrapper[5034]: I0105 22:10:10.987927 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.082445 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.175673 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.175745 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.175820 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.175847 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.175884 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.175906 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.175932 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrsht\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-kube-api-access-nrsht\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.175983 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.176007 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.176177 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.176227 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.278069 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.278411 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.278542 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.278655 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.278953 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrsht\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-kube-api-access-nrsht\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.279067 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.279195 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.279347 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.279441 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.279540 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.279644 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.278471 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.280009 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.280242 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.280580 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.281506 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.281657 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.286542 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.289286 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.296247 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.296727 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.310193 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrsht\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-kube-api-access-nrsht\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.332534 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.594541 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 05 22:10:11 crc kubenswrapper[5034]: I0105 22:10:11.918729 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94526d3f-1e21-4eef-abb7-5cd05bfb1670","Type":"ContainerStarted","Data":"56b403a81f5e53425e61c41468fd91f9f162fe94a0fcd2b29ebbf4b18ee6b855"}
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.148128 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 05 22:10:12 crc kubenswrapper[5034]: W0105 22:10:12.271475 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65a6b236_e04b_494a_a18e_5d1a8a5ae02a.slice/crio-3fb09f69dbeb5387bee2bc2e11251f87022c70b7241105fe9a0b4a447d121ae2 WatchSource:0}: Error finding container 3fb09f69dbeb5387bee2bc2e11251f87022c70b7241105fe9a0b4a447d121ae2: Status 404 returned error can't find the container with id 3fb09f69dbeb5387bee2bc2e11251f87022c70b7241105fe9a0b4a447d121ae2
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.443066 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.450351 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.459300 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-j5gxt"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.459516 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.459706 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.459813 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.466564 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.469112 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.611968 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kolla-config\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.612019 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-default\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.612049 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9twp\" (UniqueName: \"kubernetes.io/projected/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kube-api-access-v9twp\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.612244 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.612333 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.612615 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0"
Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.612665 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.612723 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714204 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714280 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714370 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kolla-config\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714430 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-default\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714460 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9twp\" (UniqueName: \"kubernetes.io/projected/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kube-api-access-v9twp\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714488 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714527 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714568 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: 
\"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714782 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.714922 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.715908 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-default\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.717286 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.722759 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kolla-config\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.730555 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.732990 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9twp\" (UniqueName: \"kubernetes.io/projected/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kube-api-access-v9twp\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.744746 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.762270 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.782522 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 22:10:12 crc kubenswrapper[5034]: I0105 22:10:12.953315 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a6b236-e04b-494a-a18e-5d1a8a5ae02a","Type":"ContainerStarted","Data":"3fb09f69dbeb5387bee2bc2e11251f87022c70b7241105fe9a0b4a447d121ae2"} Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.231711 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 22:10:13 crc kubenswrapper[5034]: W0105 22:10:13.241036 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c1a7050_af42_4822_9bcb_cc8ea32bd319.slice/crio-3bfa3d426b16f1d3678ba6270c6462dca7f0d00813fdb0d14f66a92185e7b3ab WatchSource:0}: Error finding container 3bfa3d426b16f1d3678ba6270c6462dca7f0d00813fdb0d14f66a92185e7b3ab: Status 404 returned error can't find the container with id 3bfa3d426b16f1d3678ba6270c6462dca7f0d00813fdb0d14f66a92185e7b3ab Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.829360 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.830850 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.837142 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.837498 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.837628 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.837811 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-m5jv7" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.898067 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.940488 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.940555 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.940617 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.940682 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.940727 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxw7\" (UniqueName: \"kubernetes.io/projected/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kube-api-access-vhxw7\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.940798 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.940832 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:13 crc kubenswrapper[5034]: I0105 22:10:13.940853 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.034564 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8c1a7050-af42-4822-9bcb-cc8ea32bd319","Type":"ContainerStarted","Data":"3bfa3d426b16f1d3678ba6270c6462dca7f0d00813fdb0d14f66a92185e7b3ab"} Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.056842 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.056949 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.057250 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.057387 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.057448 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhxw7\" (UniqueName: \"kubernetes.io/projected/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kube-api-access-vhxw7\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.057523 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.057554 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.057588 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.058257 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.059006 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.061430 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.063819 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.070184 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.096475 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.096692 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhxw7\" (UniqueName: \"kubernetes.io/projected/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kube-api-access-vhxw7\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.132289 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.142396 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.160811 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.218230 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.219429 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.224196 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.224384 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-b8hqt" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.224916 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.243684 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.269775 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.269813 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.269845 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kolla-config\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.269943 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mzs7\" (UniqueName: \"kubernetes.io/projected/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kube-api-access-8mzs7\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.269994 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-config-data\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.371037 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mzs7\" (UniqueName: \"kubernetes.io/projected/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kube-api-access-8mzs7\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.371162 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-config-data\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.371227 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.371247 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.371281 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kolla-config\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.372309 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kolla-config\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.374585 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-config-data\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.390717 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.390726 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.399774 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mzs7\" (UniqueName: \"kubernetes.io/projected/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kube-api-access-8mzs7\") pod \"memcached-0\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.602896 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 05 22:10:14 crc kubenswrapper[5034]: I0105 22:10:14.736982 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.064718 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bb0c349d-e74e-49eb-ba86-8a435d15ba66","Type":"ContainerStarted","Data":"905d0940dbba86815450e0a93ce49dc3d120d14cad9d7610c63e16aa8f3dae83"} Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.268903 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.676246 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.677936 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.682283 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h726s" Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.698492 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.805558 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrzdg\" (UniqueName: \"kubernetes.io/projected/5dc94453-3c0f-4b4c-a23e-f2c88e41325c-kube-api-access-lrzdg\") pod \"kube-state-metrics-0\" (UID: \"5dc94453-3c0f-4b4c-a23e-f2c88e41325c\") " pod="openstack/kube-state-metrics-0" Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.908257 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrzdg\" (UniqueName: \"kubernetes.io/projected/5dc94453-3c0f-4b4c-a23e-f2c88e41325c-kube-api-access-lrzdg\") pod \"kube-state-metrics-0\" (UID: \"5dc94453-3c0f-4b4c-a23e-f2c88e41325c\") " pod="openstack/kube-state-metrics-0" Jan 05 22:10:15 crc kubenswrapper[5034]: I0105 22:10:15.943506 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrzdg\" (UniqueName: \"kubernetes.io/projected/5dc94453-3c0f-4b4c-a23e-f2c88e41325c-kube-api-access-lrzdg\") pod \"kube-state-metrics-0\" (UID: \"5dc94453-3c0f-4b4c-a23e-f2c88e41325c\") " pod="openstack/kube-state-metrics-0" Jan 05 22:10:16 crc kubenswrapper[5034]: I0105 22:10:16.021416 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 22:10:16 crc kubenswrapper[5034]: I0105 22:10:16.115725 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"44fc54fc-2187-4b43-8e20-e8c84b8f54d3","Type":"ContainerStarted","Data":"723246246b690f3edcaccfe81b8acf9cfe22551ea39ee72470a131e85a9a455c"} Jan 05 22:10:16 crc kubenswrapper[5034]: I0105 22:10:16.793832 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:10:16 crc kubenswrapper[5034]: W0105 22:10:16.844308 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dc94453_3c0f_4b4c_a23e_f2c88e41325c.slice/crio-2355cd83acf22f29b5bfd6e6328495f2124a5aba3ef0e1054f2463e56e07e20b WatchSource:0}: Error finding container 2355cd83acf22f29b5bfd6e6328495f2124a5aba3ef0e1054f2463e56e07e20b: Status 404 returned error can't find the container with id 2355cd83acf22f29b5bfd6e6328495f2124a5aba3ef0e1054f2463e56e07e20b Jan 05 22:10:17 crc kubenswrapper[5034]: I0105 22:10:17.177004 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5dc94453-3c0f-4b4c-a23e-f2c88e41325c","Type":"ContainerStarted","Data":"2355cd83acf22f29b5bfd6e6328495f2124a5aba3ef0e1054f2463e56e07e20b"} Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.266102 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4gbcl"] Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.267874 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.274883 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gbcl"] Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.285191 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-z8hj9" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.285553 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.285759 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.317809 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-v4mvr"] Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.319484 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.329530 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v4mvr"] Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.409797 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7kw8\" (UniqueName: \"kubernetes.io/projected/8174d3dc-0931-484a-850f-3649234ef9fc-kube-api-access-g7kw8\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.409874 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjb4l\" (UniqueName: \"kubernetes.io/projected/a4f67d51-b26b-44be-beba-ea5874fe6375-kube-api-access-wjb4l\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410027 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-lib\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410078 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run-ovn\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410126 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f67d51-b26b-44be-beba-ea5874fe6375-scripts\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410308 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-ovn-controller-tls-certs\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410390 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410435 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-etc-ovs\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410506 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/8174d3dc-0931-484a-850f-3649234ef9fc-scripts\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410539 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-log\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410626 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-log-ovn\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410659 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-run\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.410678 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-combined-ca-bundle\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.512690 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-run\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.512735 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-combined-ca-bundle\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.512765 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7kw8\" (UniqueName: \"kubernetes.io/projected/8174d3dc-0931-484a-850f-3649234ef9fc-kube-api-access-g7kw8\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.512786 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjb4l\" (UniqueName: \"kubernetes.io/projected/a4f67d51-b26b-44be-beba-ea5874fe6375-kube-api-access-wjb4l\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.512817 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-lib\") pod 
\"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.512836 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run-ovn\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.512857 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f67d51-b26b-44be-beba-ea5874fe6375-scripts\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.512895 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-ovn-controller-tls-certs\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513050 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513072 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-etc-ovs\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513118 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8174d3dc-0931-484a-850f-3649234ef9fc-scripts\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513140 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-log\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513177 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-log-ovn\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513352 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-run\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513506 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-lib\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513537 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-log-ovn\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513629 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-log\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513669 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513722 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-etc-ovs\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.513811 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run-ovn\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.518961 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8174d3dc-0931-484a-850f-3649234ef9fc-scripts\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.523706 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f67d51-b26b-44be-beba-ea5874fe6375-scripts\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.524648 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-combined-ca-bundle\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.532241 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-ovn-controller-tls-certs\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.534754 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7kw8\" (UniqueName: \"kubernetes.io/projected/8174d3dc-0931-484a-850f-3649234ef9fc-kube-api-access-g7kw8\") pod \"ovn-controller-4gbcl\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.537164 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjb4l\" (UniqueName: \"kubernetes.io/projected/a4f67d51-b26b-44be-beba-ea5874fe6375-kube-api-access-wjb4l\") pod \"ovn-controller-ovs-v4mvr\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.623372 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:19 crc kubenswrapper[5034]: I0105 22:10:19.649665 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:20 crc kubenswrapper[5034]: I0105 22:10:20.468892 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:10:20 crc kubenswrapper[5034]: I0105 22:10:20.468943 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.606682 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.608280 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.611270 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.614010 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.614303 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.614516 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.614312 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2595k" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.637577 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.755295 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.755372 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.755603 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.755679 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9px\" (UniqueName: \"kubernetes.io/projected/434da13f-30c5-4464-9b48-3d93ec7762d0-kube-api-access-fw9px\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.755767 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-config\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.755964 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.756031 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.756116 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858391 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858462 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858530 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858588 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9px\" (UniqueName: \"kubernetes.io/projected/434da13f-30c5-4464-9b48-3d93ec7762d0-kube-api-access-fw9px\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858627 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-config\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858698 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858748 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858782 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc 
kubenswrapper[5034]: I0105 22:10:21.858940 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.858963 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.859735 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-config\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.867959 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.867970 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.870124 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.875351 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.879280 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9px\" (UniqueName: \"kubernetes.io/projected/434da13f-30c5-4464-9b48-3d93ec7762d0-kube-api-access-fw9px\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:21 crc kubenswrapper[5034]: I0105 22:10:21.881840 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.382070 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.383801 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.387100 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.387345 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-h8vq9" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.387492 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.387637 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.391035 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.471505 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.471577 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.471648 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.471723 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.471744 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.471784 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhll\" (UniqueName: \"kubernetes.io/projected/d8f99f63-df74-4392-a5fc-bf090571266f-kube-api-access-rnhll\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.472734 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.472797 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.538834 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.574729 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.574789 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.574832 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhll\" (UniqueName: \"kubernetes.io/projected/d8f99f63-df74-4392-a5fc-bf090571266f-kube-api-access-rnhll\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.574874 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.574919 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.574987 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.575019 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.575073 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc 
kubenswrapper[5034]: I0105 22:10:22.575580 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.575849 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.576945 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.576988 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.580676 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.585974 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.592313 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.598716 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhll\" (UniqueName: \"kubernetes.io/projected/d8f99f63-df74-4392-a5fc-bf090571266f-kube-api-access-rnhll\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.600733 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:22 crc kubenswrapper[5034]: I0105 22:10:22.710457 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:36 crc kubenswrapper[5034]: E0105 22:10:36.125506 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 05 22:10:36 crc kubenswrapper[5034]: E0105 22:10:36.126531 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vhxw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(bb0c349d-e74e-49eb-ba86-8a435d15ba66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:36 crc kubenswrapper[5034]: E0105 22:10:36.127730 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" Jan 05 22:10:36 crc kubenswrapper[5034]: E0105 22:10:36.142221 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 05 22:10:36 crc kubenswrapper[5034]: E0105 22:10:36.142382 5034 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9twp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(8c1a7050-af42-4822-9bcb-cc8ea32bd319): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:36 crc kubenswrapper[5034]: E0105 22:10:36.143531 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" Jan 05 22:10:36 crc kubenswrapper[5034]: E0105 22:10:36.347848 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" Jan 05 22:10:36 crc kubenswrapper[5034]: E0105 22:10:36.348151 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" 
podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.111781 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.112011 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjjlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(94526d3f-1e21-4eef-abb7-5cd05bfb1670): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.113186 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.115866 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.116109 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrsht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(65a6b236-e04b-494a-a18e-5d1a8a5ae02a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.117267 5034 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.355990 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.356440 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.931277 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.931486 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n79h588h684h54dh5bbh544h66h5dch8ch5b6h5c9hddh66dhffh666h7dh598hbdhd7h696h577hc4h64hch667h7h646h6ch56h94h667h5b8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mzs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(44fc54fc-2187-4b43-8e20-e8c84b8f54d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:37 crc kubenswrapper[5034]: E0105 22:10:37.932876 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="44fc54fc-2187-4b43-8e20-e8c84b8f54d3" Jan 05 22:10:38 crc kubenswrapper[5034]: E0105 22:10:38.363256 5034 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="44fc54fc-2187-4b43-8e20-e8c84b8f54d3" Jan 05 22:10:42 crc kubenswrapper[5034]: I0105 22:10:42.236378 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 22:10:42 crc kubenswrapper[5034]: I0105 22:10:42.312060 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.490925 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.491318 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9crc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-mm8cp_openstack(1c9673a7-0213-4e3a-abf5-b2c90968560a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.492491 5034 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" podUID="1c9673a7-0213-4e3a-abf5-b2c90968560a" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.498430 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.498582 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7mrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-mrzlh_openstack(8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.499726 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" podUID="8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.504139 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.504281 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcw95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-cw7vg_openstack(379a92a9-c928-439f-9516-caa432d42fcc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.505409 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" podUID="379a92a9-c928-439f-9516-caa432d42fcc" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.516132 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.516312 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pqzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-dbhcb_openstack(1fad3d7d-2f02-4dcc-9cbe-72a13438bcda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:10:42 crc kubenswrapper[5034]: E0105 22:10:42.517499 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" podUID="1fad3d7d-2f02-4dcc-9cbe-72a13438bcda" Jan 05 22:10:42 crc kubenswrapper[5034]: W0105 22:10:42.546252 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8f99f63_df74_4392_a5fc_bf090571266f.slice/crio-7b1e6d0e3da9cefaef92129e0975e76fe0a76f33fef0b719f39837796a04b3dd WatchSource:0}: Error finding container 7b1e6d0e3da9cefaef92129e0975e76fe0a76f33fef0b719f39837796a04b3dd: Status 404 returned error can't find the container with id 7b1e6d0e3da9cefaef92129e0975e76fe0a76f33fef0b719f39837796a04b3dd Jan 05 22:10:42 crc kubenswrapper[5034]: W0105 22:10:42.548582 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod434da13f_30c5_4464_9b48_3d93ec7762d0.slice/crio-26b277e2719d275282d27a67562c85870055ba822efb900a1481e1687f335ed3 WatchSource:0}: Error finding container 26b277e2719d275282d27a67562c85870055ba822efb900a1481e1687f335ed3: Status 404 returned error can't find the container with id 26b277e2719d275282d27a67562c85870055ba822efb900a1481e1687f335ed3 Jan 05 22:10:42 crc 
kubenswrapper[5034]: I0105 22:10:42.851402 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gbcl"] Jan 05 22:10:43 crc kubenswrapper[5034]: W0105 22:10:43.134382 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8174d3dc_0931_484a_850f_3649234ef9fc.slice/crio-c3c5d34d6b66b0088f1ab0fef83d966d895f6283f80d360c3db0d7dbe3ab7b3c WatchSource:0}: Error finding container c3c5d34d6b66b0088f1ab0fef83d966d895f6283f80d360c3db0d7dbe3ab7b3c: Status 404 returned error can't find the container with id c3c5d34d6b66b0088f1ab0fef83d966d895f6283f80d360c3db0d7dbe3ab7b3c Jan 05 22:10:43 crc kubenswrapper[5034]: E0105 22:10:43.139675 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 05 22:10:43 crc kubenswrapper[5034]: E0105 22:10:43.139726 5034 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 05 22:10:43 crc kubenswrapper[5034]: E0105 22:10:43.139880 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrzdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod kube-state-metrics-0_openstack(5dc94453-3c0f-4b4c-a23e-f2c88e41325c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 22:10:43 crc kubenswrapper[5034]: E0105 22:10:43.141037 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="5dc94453-3c0f-4b4c-a23e-f2c88e41325c" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.382589 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v4mvr"] Jan 05 22:10:43 crc kubenswrapper[5034]: W0105 22:10:43.394666 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4f67d51_b26b_44be_beba_ea5874fe6375.slice/crio-9c06874e396c32c6e21fbb89dd0184f544483db04ff7c516fb36ef304d6c5577 WatchSource:0}: Error finding container 9c06874e396c32c6e21fbb89dd0184f544483db04ff7c516fb36ef304d6c5577: Status 404 returned error can't find the container with id 9c06874e396c32c6e21fbb89dd0184f544483db04ff7c516fb36ef304d6c5577 Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.403359 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gbcl" event={"ID":"8174d3dc-0931-484a-850f-3649234ef9fc","Type":"ContainerStarted","Data":"c3c5d34d6b66b0088f1ab0fef83d966d895f6283f80d360c3db0d7dbe3ab7b3c"} Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.405001 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"434da13f-30c5-4464-9b48-3d93ec7762d0","Type":"ContainerStarted","Data":"26b277e2719d275282d27a67562c85870055ba822efb900a1481e1687f335ed3"} Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.406417 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8f99f63-df74-4392-a5fc-bf090571266f","Type":"ContainerStarted","Data":"7b1e6d0e3da9cefaef92129e0975e76fe0a76f33fef0b719f39837796a04b3dd"} Jan 05 22:10:43 crc kubenswrapper[5034]: E0105 22:10:43.407760 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="5dc94453-3c0f-4b4c-a23e-f2c88e41325c" Jan 05 22:10:43 crc kubenswrapper[5034]: E0105 22:10:43.410538 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" podUID="1fad3d7d-2f02-4dcc-9cbe-72a13438bcda" Jan 05 22:10:43 crc kubenswrapper[5034]: E0105 22:10:43.410713 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" 
podUID="8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.785769 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.792946 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.942305 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379a92a9-c928-439f-9516-caa432d42fcc-config\") pod \"379a92a9-c928-439f-9516-caa432d42fcc\" (UID: \"379a92a9-c928-439f-9516-caa432d42fcc\") " Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.942545 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-config\") pod \"1c9673a7-0213-4e3a-abf5-b2c90968560a\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.942678 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-dns-svc\") pod \"1c9673a7-0213-4e3a-abf5-b2c90968560a\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.942742 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9crc\" (UniqueName: \"kubernetes.io/projected/1c9673a7-0213-4e3a-abf5-b2c90968560a-kube-api-access-k9crc\") pod \"1c9673a7-0213-4e3a-abf5-b2c90968560a\" (UID: \"1c9673a7-0213-4e3a-abf5-b2c90968560a\") " Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.942821 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcw95\" (UniqueName: \"kubernetes.io/projected/379a92a9-c928-439f-9516-caa432d42fcc-kube-api-access-mcw95\") pod \"379a92a9-c928-439f-9516-caa432d42fcc\" (UID: \"379a92a9-c928-439f-9516-caa432d42fcc\") " Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.942850 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379a92a9-c928-439f-9516-caa432d42fcc-config" (OuterVolumeSpecName: "config") pod "379a92a9-c928-439f-9516-caa432d42fcc" (UID: "379a92a9-c928-439f-9516-caa432d42fcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.943547 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c9673a7-0213-4e3a-abf5-b2c90968560a" (UID: "1c9673a7-0213-4e3a-abf5-b2c90968560a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.943730 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-config" (OuterVolumeSpecName: "config") pod "1c9673a7-0213-4e3a-abf5-b2c90968560a" (UID: "1c9673a7-0213-4e3a-abf5-b2c90968560a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.944658 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379a92a9-c928-439f-9516-caa432d42fcc-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.944677 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.944687 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c9673a7-0213-4e3a-abf5-b2c90968560a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.948881 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9673a7-0213-4e3a-abf5-b2c90968560a-kube-api-access-k9crc" (OuterVolumeSpecName: "kube-api-access-k9crc") pod "1c9673a7-0213-4e3a-abf5-b2c90968560a" (UID: "1c9673a7-0213-4e3a-abf5-b2c90968560a"). InnerVolumeSpecName "kube-api-access-k9crc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:10:43 crc kubenswrapper[5034]: I0105 22:10:43.963214 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379a92a9-c928-439f-9516-caa432d42fcc-kube-api-access-mcw95" (OuterVolumeSpecName: "kube-api-access-mcw95") pod "379a92a9-c928-439f-9516-caa432d42fcc" (UID: "379a92a9-c928-439f-9516-caa432d42fcc"). InnerVolumeSpecName "kube-api-access-mcw95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.045740 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9crc\" (UniqueName: \"kubernetes.io/projected/1c9673a7-0213-4e3a-abf5-b2c90968560a-kube-api-access-k9crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.045770 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcw95\" (UniqueName: \"kubernetes.io/projected/379a92a9-c928-439f-9516-caa432d42fcc-kube-api-access-mcw95\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.420280 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v4mvr" event={"ID":"a4f67d51-b26b-44be-beba-ea5874fe6375","Type":"ContainerStarted","Data":"9c06874e396c32c6e21fbb89dd0184f544483db04ff7c516fb36ef304d6c5577"} Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.421286 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" event={"ID":"1c9673a7-0213-4e3a-abf5-b2c90968560a","Type":"ContainerDied","Data":"921381cf05f774cd7f623567a8ad2c0e511cfc2288a6aba062829d9c8a404b3d"} Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.421340 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-mm8cp" Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.426872 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" event={"ID":"379a92a9-c928-439f-9516-caa432d42fcc","Type":"ContainerDied","Data":"52155d5abc0a0ecbe8ae06279c2543a35891d70e7704c1b9e5248b38a390fc8c"} Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.426963 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-cw7vg" Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.489911 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-mm8cp"] Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.499163 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-mm8cp"] Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.512905 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-cw7vg"] Jan 05 22:10:44 crc kubenswrapper[5034]: I0105 22:10:44.522158 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-cw7vg"] Jan 05 22:10:45 crc kubenswrapper[5034]: I0105 22:10:45.849485 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9673a7-0213-4e3a-abf5-b2c90968560a" path="/var/lib/kubelet/pods/1c9673a7-0213-4e3a-abf5-b2c90968560a/volumes" Jan 05 22:10:45 crc kubenswrapper[5034]: I0105 22:10:45.850232 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379a92a9-c928-439f-9516-caa432d42fcc" path="/var/lib/kubelet/pods/379a92a9-c928-439f-9516-caa432d42fcc/volumes" Jan 05 22:10:46 crc kubenswrapper[5034]: I0105 22:10:46.441893 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v4mvr" event={"ID":"a4f67d51-b26b-44be-beba-ea5874fe6375","Type":"ContainerStarted","Data":"0faf886a752e27ec56ab34de4590b1ff5b59d045df96fc71a7ef8c57630f88d1"} Jan 05 22:10:46 crc kubenswrapper[5034]: I0105 22:10:46.443147 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"434da13f-30c5-4464-9b48-3d93ec7762d0","Type":"ContainerStarted","Data":"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5"} Jan 05 22:10:46 crc kubenswrapper[5034]: I0105 22:10:46.444567 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8f99f63-df74-4392-a5fc-bf090571266f","Type":"ContainerStarted","Data":"2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7"} Jan 05 22:10:46 crc kubenswrapper[5034]: I0105 22:10:46.445821 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gbcl" event={"ID":"8174d3dc-0931-484a-850f-3649234ef9fc","Type":"ContainerStarted","Data":"c5a11337c879be24ab652f59ee08fa45eea5831a7d805b7ba9723c0e7a770afa"} Jan 05 22:10:46 crc kubenswrapper[5034]: I0105 22:10:46.446139 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4gbcl" Jan 05 22:10:46 crc kubenswrapper[5034]: I0105 22:10:46.479643 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4gbcl" podStartSLOduration=24.681122652 podStartE2EDuration="27.479627578s" podCreationTimestamp="2026-01-05 22:10:19 +0000 UTC" firstStartedPulling="2026-01-05 22:10:43.139349162 +0000 UTC m=+1135.511348591" lastFinishedPulling="2026-01-05 22:10:45.937854078 +0000 UTC m=+1138.309853517" observedRunningTime="2026-01-05 22:10:46.478041513 +0000 UTC m=+1138.850040952" watchObservedRunningTime="2026-01-05 22:10:46.479627578 +0000 UTC m=+1138.851627017" Jan 05 22:10:47 crc kubenswrapper[5034]: I0105 22:10:47.459848 5034 generic.go:334] "Generic (PLEG): container finished" podID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerID="0faf886a752e27ec56ab34de4590b1ff5b59d045df96fc71a7ef8c57630f88d1" exitCode=0 Jan 05 22:10:47 crc kubenswrapper[5034]: I0105 
22:10:47.459951 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v4mvr" event={"ID":"a4f67d51-b26b-44be-beba-ea5874fe6375","Type":"ContainerDied","Data":"0faf886a752e27ec56ab34de4590b1ff5b59d045df96fc71a7ef8c57630f88d1"} Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.474108 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v4mvr" event={"ID":"a4f67d51-b26b-44be-beba-ea5874fe6375","Type":"ContainerStarted","Data":"493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8"} Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.474862 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v4mvr" event={"ID":"a4f67d51-b26b-44be-beba-ea5874fe6375","Type":"ContainerStarted","Data":"578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb"} Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.474885 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.474898 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.475936 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"434da13f-30c5-4464-9b48-3d93ec7762d0","Type":"ContainerStarted","Data":"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833"} Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.477498 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8f99f63-df74-4392-a5fc-bf090571266f","Type":"ContainerStarted","Data":"f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b"} Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.501086 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-v4mvr" podStartSLOduration=27.969651396 podStartE2EDuration="30.501069456s" podCreationTimestamp="2026-01-05 22:10:19 +0000 UTC" firstStartedPulling="2026-01-05 22:10:43.396774884 +0000 UTC m=+1135.768774323" lastFinishedPulling="2026-01-05 22:10:45.928192944 +0000 UTC m=+1138.300192383" observedRunningTime="2026-01-05 22:10:49.497636768 +0000 UTC m=+1141.869636217" watchObservedRunningTime="2026-01-05 22:10:49.501069456 +0000 UTC m=+1141.873068895" Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.537647 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.255236128 podStartE2EDuration="29.537624295s" podCreationTimestamp="2026-01-05 22:10:20 +0000 UTC" firstStartedPulling="2026-01-05 22:10:42.557281576 +0000 UTC m=+1134.929281005" lastFinishedPulling="2026-01-05 22:10:48.839669743 +0000 UTC m=+1141.211669172" observedRunningTime="2026-01-05 22:10:49.52687136 +0000 UTC m=+1141.898870869" watchObservedRunningTime="2026-01-05 22:10:49.537624295 +0000 UTC m=+1141.909623754" Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.539387 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.555826 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.284861881 podStartE2EDuration="28.555805123s" podCreationTimestamp="2026-01-05 22:10:21 +0000 UTC" 
firstStartedPulling="2026-01-05 22:10:42.548739583 +0000 UTC m=+1134.920739022" lastFinishedPulling="2026-01-05 22:10:48.819682825 +0000 UTC m=+1141.191682264" observedRunningTime="2026-01-05 22:10:49.546846998 +0000 UTC m=+1141.918846457" watchObservedRunningTime="2026-01-05 22:10:49.555805123 +0000 UTC m=+1141.927804562" Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.584043 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.710775 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:49 crc kubenswrapper[5034]: I0105 22:10:49.744499 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:50 crc kubenswrapper[5034]: I0105 22:10:50.468705 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:10:50 crc kubenswrapper[5034]: I0105 22:10:50.468758 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:10:50 crc kubenswrapper[5034]: I0105 22:10:50.468801 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:10:50 crc kubenswrapper[5034]: I0105 22:10:50.469513 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7aa61e9f5aaa409d4332d17291c1246e891073205f554c85d6e919f6906d1cd4"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:10:50 crc kubenswrapper[5034]: I0105 22:10:50.469568 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://7aa61e9f5aaa409d4332d17291c1246e891073205f554c85d6e919f6906d1cd4" gracePeriod=600 Jan 05 22:10:50 crc kubenswrapper[5034]: I0105 22:10:50.483981 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:50 crc kubenswrapper[5034]: I0105 22:10:50.484273 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:51 crc kubenswrapper[5034]: I0105 22:10:51.494798 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="7aa61e9f5aaa409d4332d17291c1246e891073205f554c85d6e919f6906d1cd4" exitCode=0 Jan 05 22:10:51 crc kubenswrapper[5034]: I0105 22:10:51.495876 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"7aa61e9f5aaa409d4332d17291c1246e891073205f554c85d6e919f6906d1cd4"} Jan 05 22:10:51 
crc kubenswrapper[5034]: I0105 22:10:51.495935 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"45da2bec73ffc166cc700c72e797b90c9621bfbc99e0234553fa898f473409e8"} Jan 05 22:10:51 crc kubenswrapper[5034]: I0105 22:10:51.495962 5034 scope.go:117] "RemoveContainer" containerID="c2ae88310c27c8bb417de34e2de1e513ef4f2cf46c667d74e4ed38e85d67a96f" Jan 05 22:10:51 crc kubenswrapper[5034]: I0105 22:10:51.557518 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 05 22:10:51 crc kubenswrapper[5034]: I0105 22:10:51.558733 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 05 22:10:51 crc kubenswrapper[5034]: I0105 22:10:51.869961 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-mrzlh"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.044059 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xzj87"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.048265 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.057911 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.067173 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-794868bd45-r74gm"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.068986 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.074548 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.107136 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-r74gm"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.123849 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xzj87"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.143628 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-dbhcb"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.182347 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.183910 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.187840 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.188514 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-d4fcz" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.188865 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.189419 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.226686 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovs-rundir\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.226724 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.226747 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovn-rundir\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.226815 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.226856 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqdlg\" (UniqueName: \"kubernetes.io/projected/88903e3e-9598-4699-97e2-19bfc4287037-kube-api-access-fqdlg\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.226879 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-dns-svc\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.226905 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-config\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " 
pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.226983 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317f553-2101-4507-8f08-52e23105b5c1-config\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.227018 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-combined-ca-bundle\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.227040 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6s2n\" (UniqueName: \"kubernetes.io/projected/9317f553-2101-4507-8f08-52e23105b5c1-kube-api-access-c6s2n\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.238304 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-zfqq7"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.240075 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.242789 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.262704 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.282450 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-zfqq7"] Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340583 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340638 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqdlg\" (UniqueName: \"kubernetes.io/projected/88903e3e-9598-4699-97e2-19bfc4287037-kube-api-access-fqdlg\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340672 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-dns-svc\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340710 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-config\") pod 
\"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340737 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-config\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340776 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340810 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340837 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340857 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340885 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317f553-2101-4507-8f08-52e23105b5c1-config\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340935 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ld8q\" (UniqueName: \"kubernetes.io/projected/eda1f147-b2fb-4349-ba17-674073870a4b-kube-api-access-2ld8q\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340963 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-combined-ca-bundle\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.340987 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6s2n\" (UniqueName: \"kubernetes.io/projected/9317f553-2101-4507-8f08-52e23105b5c1-kube-api-access-c6s2n\") pod \"ovn-controller-metrics-xzj87\" (UID: 
\"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341010 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341075 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovs-rundir\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341119 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341147 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovn-rundir\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341181 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341219 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-config\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341262 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-scripts\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341289 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbgg\" (UniqueName: \"kubernetes.io/projected/79846781-e528-4b43-aacd-cbc32085ca10-kube-api-access-ntbgg\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.341318 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " 
pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.342664 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovn-rundir\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.342965 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovs-rundir\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.343392 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-config\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.344017 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.344178 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-dns-svc\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.344279 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317f553-2101-4507-8f08-52e23105b5c1-config\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.360591 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.362055 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-combined-ca-bundle\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.366970 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6s2n\" (UniqueName: \"kubernetes.io/projected/9317f553-2101-4507-8f08-52e23105b5c1-kube-api-access-c6s2n\") pod \"ovn-controller-metrics-xzj87\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.367835 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fqdlg\" (UniqueName: \"kubernetes.io/projected/88903e3e-9598-4699-97e2-19bfc4287037-kube-api-access-fqdlg\") pod \"dnsmasq-dns-794868bd45-r74gm\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.414033 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.424345 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444516 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-config\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444584 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-scripts\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444604 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntbgg\" (UniqueName: \"kubernetes.io/projected/79846781-e528-4b43-aacd-cbc32085ca10-kube-api-access-ntbgg\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444655 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444697 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-config\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444730 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444756 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444771 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444786 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444821 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ld8q\" (UniqueName: \"kubernetes.io/projected/eda1f147-b2fb-4349-ba17-674073870a4b-kube-api-access-2ld8q\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444845 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.444900 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.445605 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-config\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.445930 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.446452 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.447202 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-scripts\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.449324 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.449685 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.449716 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.450515 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-config\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.453894 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.456320 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.465853 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ld8q\" (UniqueName: \"kubernetes.io/projected/eda1f147-b2fb-4349-ba17-674073870a4b-kube-api-access-2ld8q\") pod \"ovn-northd-0\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " pod="openstack/ovn-northd-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.475253 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntbgg\" (UniqueName: \"kubernetes.io/projected/79846781-e528-4b43-aacd-cbc32085ca10-kube-api-access-ntbgg\") pod \"dnsmasq-dns-757dc6fff9-zfqq7\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.509968 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8c1a7050-af42-4822-9bcb-cc8ea32bd319","Type":"ContainerStarted","Data":"376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109"} Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.511511 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"44fc54fc-2187-4b43-8e20-e8c84b8f54d3","Type":"ContainerStarted","Data":"663dddc4729d62810041b4ac300dd6293f55ca190f90f1a3e6f6b67eea444427"} Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.512004 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.513190 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bb0c349d-e74e-49eb-ba86-8a435d15ba66","Type":"ContainerStarted","Data":"fe2064fa3b2b7e941e1493c9cb05377a7cb7976cfac4b8c759f921b2f44f5d59"} Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.540768 5034 util.go:30] "No sandbox for pod 
Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.561274 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7"
Jan 05 22:10:52 crc kubenswrapper[5034]: I0105 22:10:52.575158 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.275110077 podStartE2EDuration="38.575131679s" podCreationTimestamp="2026-01-05 22:10:14 +0000 UTC" firstStartedPulling="2026-01-05 22:10:15.292662243 +0000 UTC m=+1107.664661682" lastFinishedPulling="2026-01-05 22:10:51.592683845 +0000 UTC m=+1143.964683284" observedRunningTime="2026-01-05 22:10:52.573529083 +0000 UTC m=+1144.945528522" watchObservedRunningTime="2026-01-05 22:10:52.575131679 +0000 UTC m=+1144.947131118"
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.095432 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh"
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.113246 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-dbhcb"
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.118023 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-r74gm"]
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.157521 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xzj87"]
Jan 05 22:10:53 crc kubenswrapper[5034]: W0105 22:10:53.234046 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9317f553_2101_4507_8f08_52e23105b5c1.slice/crio-30e2967276fa8076789279ab9e5775ebc6a45cf363737fe4c6fbb99b755de869 WatchSource:0}: Error finding container 30e2967276fa8076789279ab9e5775ebc6a45cf363737fe4c6fbb99b755de869: Status 404 returned error can't find the container with id 30e2967276fa8076789279ab9e5775ebc6a45cf363737fe4c6fbb99b755de869
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.264875 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7mrs\" (UniqueName: \"kubernetes.io/projected/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-kube-api-access-p7mrs\") pod \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") "
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.265610 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-config\") pod \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") "
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.265656 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-dns-svc\") pod \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") "
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.265726 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-dns-svc\") pod \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") "
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.265794 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-config\") pod \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\" (UID: \"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58\") "
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.265871 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pqzb\" (UniqueName: \"kubernetes.io/projected/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-kube-api-access-6pqzb\") pod \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\" (UID: \"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda\") "
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.267702 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58" (UID: "8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.268457 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fad3d7d-2f02-4dcc-9cbe-72a13438bcda" (UID: "1fad3d7d-2f02-4dcc-9cbe-72a13438bcda"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.268800 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-config" (OuterVolumeSpecName: "config") pod "1fad3d7d-2f02-4dcc-9cbe-72a13438bcda" (UID: "1fad3d7d-2f02-4dcc-9cbe-72a13438bcda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.269178 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-config" (OuterVolumeSpecName: "config") pod "8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58" (UID: "8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.291375 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-kube-api-access-p7mrs" (OuterVolumeSpecName: "kube-api-access-p7mrs") pod "8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58" (UID: "8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58"). InnerVolumeSpecName "kube-api-access-p7mrs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.308908 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-kube-api-access-6pqzb" (OuterVolumeSpecName: "kube-api-access-6pqzb") pod "1fad3d7d-2f02-4dcc-9cbe-72a13438bcda" (UID: "1fad3d7d-2f02-4dcc-9cbe-72a13438bcda"). InnerVolumeSpecName "kube-api-access-6pqzb". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.368326 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.368366 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pqzb\" (UniqueName: \"kubernetes.io/projected/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-kube-api-access-6pqzb\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.368378 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7mrs\" (UniqueName: \"kubernetes.io/projected/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-kube-api-access-p7mrs\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.368389 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.368417 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.368426 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.365479 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 22:10:53 crc kubenswrapper[5034]: W0105 22:10:53.371548 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeda1f147_b2fb_4349_ba17_674073870a4b.slice/crio-2152d44185d9dfb37a3903800fd2ffa17aff3c4304dd5c1ec3c12cb421f6845a WatchSource:0}: Error finding container 2152d44185d9dfb37a3903800fd2ffa17aff3c4304dd5c1ec3c12cb421f6845a: Status 404 returned error can't find the container with id 2152d44185d9dfb37a3903800fd2ffa17aff3c4304dd5c1ec3c12cb421f6845a Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.528601 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" event={"ID":"1fad3d7d-2f02-4dcc-9cbe-72a13438bcda","Type":"ContainerDied","Data":"4005c7611f2b8fbf1a49827dd0bb3661933dd9c54c96eb5c573f674693de1135"} Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.529203 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-dbhcb" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.533993 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" event={"ID":"8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58","Type":"ContainerDied","Data":"beb37deed6fe67b8af2e16eb93810bd5c7ca6228d9a526378fc69814f54828f2"} Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.534130 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-mrzlh" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.537826 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xzj87" event={"ID":"9317f553-2101-4507-8f08-52e23105b5c1","Type":"ContainerStarted","Data":"b97d5cd29ffddf99657a5c2482efc985154a68be845fbf75fb23b805c3a393b7"} Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.537860 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xzj87" event={"ID":"9317f553-2101-4507-8f08-52e23105b5c1","Type":"ContainerStarted","Data":"30e2967276fa8076789279ab9e5775ebc6a45cf363737fe4c6fbb99b755de869"} Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.544650 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eda1f147-b2fb-4349-ba17-674073870a4b","Type":"ContainerStarted","Data":"2152d44185d9dfb37a3903800fd2ffa17aff3c4304dd5c1ec3c12cb421f6845a"} Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.547687 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a6b236-e04b-494a-a18e-5d1a8a5ae02a","Type":"ContainerStarted","Data":"5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518"} Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.549891 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94526d3f-1e21-4eef-abb7-5cd05bfb1670","Type":"ContainerStarted","Data":"a9b0af71996b2f7b5cfc0164a2338f465cc5484f2c68ff42352cd8642afd9b56"} Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.552228 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-r74gm" event={"ID":"88903e3e-9598-4699-97e2-19bfc4287037","Type":"ContainerStarted","Data":"e50b6f578dfc5a8664e14dde8fb074a62b93841f85eee133b7c8ff0577fe6235"} Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.584247 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xzj87" podStartSLOduration=2.581524123 podStartE2EDuration="2.581524123s" podCreationTimestamp="2026-01-05 22:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:10:53.560024502 +0000 UTC m=+1145.932023941" watchObservedRunningTime="2026-01-05 22:10:53.581524123 +0000 UTC m=+1145.953523562" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.710420 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-zfqq7"] Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.736643 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-mrzlh"] Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.744730 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-mrzlh"] Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.763019 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-dbhcb"] Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.770940 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-dbhcb"] Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.852109 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fad3d7d-2f02-4dcc-9cbe-72a13438bcda" 
path="/var/lib/kubelet/pods/1fad3d7d-2f02-4dcc-9cbe-72a13438bcda/volumes" Jan 05 22:10:53 crc kubenswrapper[5034]: I0105 22:10:53.852952 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58" path="/var/lib/kubelet/pods/8b93b0d6-8bbd-4093-9b8e-3f987ab5ec58/volumes" Jan 05 22:10:54 crc kubenswrapper[5034]: I0105 22:10:54.566883 5034 generic.go:334] "Generic (PLEG): container finished" podID="79846781-e528-4b43-aacd-cbc32085ca10" containerID="3dbd50eef769e0db2dc32386f7c8ebfc702facef92f06a9ed6a138fc721a3e21" exitCode=0 Jan 05 22:10:54 crc kubenswrapper[5034]: I0105 22:10:54.566982 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" event={"ID":"79846781-e528-4b43-aacd-cbc32085ca10","Type":"ContainerDied","Data":"3dbd50eef769e0db2dc32386f7c8ebfc702facef92f06a9ed6a138fc721a3e21"} Jan 05 22:10:54 crc kubenswrapper[5034]: I0105 22:10:54.567273 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" event={"ID":"79846781-e528-4b43-aacd-cbc32085ca10","Type":"ContainerStarted","Data":"5c848a23b2367bf7708582f74025f39c1abc4b8728b4e3dac5e3a5b45791da1e"} Jan 05 22:10:54 crc kubenswrapper[5034]: I0105 22:10:54.569560 5034 generic.go:334] "Generic (PLEG): container finished" podID="88903e3e-9598-4699-97e2-19bfc4287037" containerID="2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98" exitCode=0 Jan 05 22:10:54 crc kubenswrapper[5034]: I0105 22:10:54.570255 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-r74gm" event={"ID":"88903e3e-9598-4699-97e2-19bfc4287037","Type":"ContainerDied","Data":"2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98"} Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.590063 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eda1f147-b2fb-4349-ba17-674073870a4b","Type":"ContainerStarted","Data":"cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17"} Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.590718 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eda1f147-b2fb-4349-ba17-674073870a4b","Type":"ContainerStarted","Data":"0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911"} Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.590740 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.593503 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" event={"ID":"79846781-e528-4b43-aacd-cbc32085ca10","Type":"ContainerStarted","Data":"d6a80d4ad135d8f2aa8274f9023410d8716b8f0402953ca37061ac0cba988252"} Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.593601 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.596110 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-r74gm" event={"ID":"88903e3e-9598-4699-97e2-19bfc4287037","Type":"ContainerStarted","Data":"bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1"} Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.596266 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:10:55 crc 
kubenswrapper[5034]: I0105 22:10:55.632843 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" podStartSLOduration=3.093895758 podStartE2EDuration="3.632826077s" podCreationTimestamp="2026-01-05 22:10:52 +0000 UTC" firstStartedPulling="2026-01-05 22:10:53.63906294 +0000 UTC m=+1146.011062379" lastFinishedPulling="2026-01-05 22:10:54.177993259 +0000 UTC m=+1146.549992698" observedRunningTime="2026-01-05 22:10:55.627815395 +0000 UTC m=+1147.999814844" watchObservedRunningTime="2026-01-05 22:10:55.632826077 +0000 UTC m=+1148.004825516" Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.634226 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.363213147 podStartE2EDuration="3.634220797s" podCreationTimestamp="2026-01-05 22:10:52 +0000 UTC" firstStartedPulling="2026-01-05 22:10:53.375608797 +0000 UTC m=+1145.747608226" lastFinishedPulling="2026-01-05 22:10:54.646616437 +0000 UTC m=+1147.018615876" observedRunningTime="2026-01-05 22:10:55.610181193 +0000 UTC m=+1147.982180632" watchObservedRunningTime="2026-01-05 22:10:55.634220797 +0000 UTC m=+1148.006220226" Jan 05 22:10:55 crc kubenswrapper[5034]: I0105 22:10:55.652683 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-794868bd45-r74gm" podStartSLOduration=4.128562274 podStartE2EDuration="4.652667101s" podCreationTimestamp="2026-01-05 22:10:51 +0000 UTC" firstStartedPulling="2026-01-05 22:10:53.203824031 +0000 UTC m=+1145.575823460" lastFinishedPulling="2026-01-05 22:10:53.727928848 +0000 UTC m=+1146.099928287" observedRunningTime="2026-01-05 22:10:55.647922217 +0000 UTC m=+1148.019921666" watchObservedRunningTime="2026-01-05 22:10:55.652667101 +0000 UTC m=+1148.024666540" Jan 05 22:10:56 crc kubenswrapper[5034]: I0105 22:10:56.651308 5034 generic.go:334] "Generic (PLEG): container finished" podID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" containerID="fe2064fa3b2b7e941e1493c9cb05377a7cb7976cfac4b8c759f921b2f44f5d59" exitCode=0 Jan 05 22:10:56 crc kubenswrapper[5034]: I0105 22:10:56.651398 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bb0c349d-e74e-49eb-ba86-8a435d15ba66","Type":"ContainerDied","Data":"fe2064fa3b2b7e941e1493c9cb05377a7cb7976cfac4b8c759f921b2f44f5d59"} Jan 05 22:10:56 crc kubenswrapper[5034]: I0105 22:10:56.656635 5034 generic.go:334] "Generic (PLEG): container finished" podID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerID="376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109" exitCode=0 Jan 05 22:10:56 crc kubenswrapper[5034]: I0105 22:10:56.656723 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8c1a7050-af42-4822-9bcb-cc8ea32bd319","Type":"ContainerDied","Data":"376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109"} Jan 05 22:10:57 crc kubenswrapper[5034]: I0105 22:10:57.667298 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8c1a7050-af42-4822-9bcb-cc8ea32bd319","Type":"ContainerStarted","Data":"ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365"} Jan 05 22:10:57 crc kubenswrapper[5034]: I0105 22:10:57.671738 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"bb0c349d-e74e-49eb-ba86-8a435d15ba66","Type":"ContainerStarted","Data":"3960d5a24203a890e55b4c5a09107afdae62bb85f6aa67fa283d78bfd0a56edd"} Jan 05 22:10:57 crc kubenswrapper[5034]: I0105 22:10:57.701961 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.636252848 podStartE2EDuration="46.701928398s" podCreationTimestamp="2026-01-05 22:10:11 +0000 UTC" firstStartedPulling="2026-01-05 22:10:13.276258772 +0000 UTC m=+1105.648258211" lastFinishedPulling="2026-01-05 22:10:51.341934322 +0000 UTC m=+1143.713933761" observedRunningTime="2026-01-05 22:10:57.69566752 +0000 UTC m=+1150.067666959" watchObservedRunningTime="2026-01-05 22:10:57.701928398 +0000 UTC m=+1150.073927857" Jan 05 22:10:57 crc kubenswrapper[5034]: I0105 22:10:57.724130 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.0517797 podStartE2EDuration="45.724083599s" podCreationTimestamp="2026-01-05 22:10:12 +0000 UTC" firstStartedPulling="2026-01-05 22:10:14.739724277 +0000 UTC m=+1107.111723716" lastFinishedPulling="2026-01-05 22:10:51.412028176 +0000 UTC m=+1143.784027615" observedRunningTime="2026-01-05 22:10:57.718989664 +0000 UTC m=+1150.090989113" watchObservedRunningTime="2026-01-05 22:10:57.724083599 +0000 UTC m=+1150.096083038" Jan 05 22:10:59 crc kubenswrapper[5034]: I0105 22:10:59.604224 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 05 22:10:59 crc kubenswrapper[5034]: I0105 22:10:59.689654 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5dc94453-3c0f-4b4c-a23e-f2c88e41325c","Type":"ContainerStarted","Data":"85dfecff8a53f1e91c768d3fcda1317cf930084d81b47ee5eed865627d3cd7c7"} Jan 05 22:10:59 crc kubenswrapper[5034]: I0105 22:10:59.689949 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 05 22:10:59 crc kubenswrapper[5034]: I0105 22:10:59.714454 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.601612536 podStartE2EDuration="44.714420568s" podCreationTimestamp="2026-01-05 22:10:15 +0000 UTC" firstStartedPulling="2026-01-05 22:10:16.847870358 +0000 UTC m=+1109.219869797" lastFinishedPulling="2026-01-05 22:10:58.96067838 +0000 UTC m=+1151.332677829" observedRunningTime="2026-01-05 22:10:59.70851602 +0000 UTC m=+1152.080515449" watchObservedRunningTime="2026-01-05 22:10:59.714420568 +0000 UTC m=+1152.086420017" Jan 05 22:11:02 crc kubenswrapper[5034]: I0105 22:11:02.441381 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:11:02 crc kubenswrapper[5034]: I0105 22:11:02.563188 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:11:02 crc kubenswrapper[5034]: I0105 22:11:02.616577 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-r74gm"] Jan 05 22:11:02 crc kubenswrapper[5034]: I0105 22:11:02.717148 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-794868bd45-r74gm" podUID="88903e3e-9598-4699-97e2-19bfc4287037" containerName="dnsmasq-dns" containerID="cri-o://bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1" gracePeriod=10 Jan 05 22:11:02 crc 
kubenswrapper[5034]: I0105 22:11:02.782944 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 05 22:11:02 crc kubenswrapper[5034]: I0105 22:11:02.783000 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.158840 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.282954 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-config\") pod \"88903e3e-9598-4699-97e2-19bfc4287037\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.283186 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-dns-svc\") pod \"88903e3e-9598-4699-97e2-19bfc4287037\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.283283 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqdlg\" (UniqueName: \"kubernetes.io/projected/88903e3e-9598-4699-97e2-19bfc4287037-kube-api-access-fqdlg\") pod \"88903e3e-9598-4699-97e2-19bfc4287037\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.283426 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-ovsdbserver-sb\") pod \"88903e3e-9598-4699-97e2-19bfc4287037\" (UID: \"88903e3e-9598-4699-97e2-19bfc4287037\") " Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.299385 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88903e3e-9598-4699-97e2-19bfc4287037-kube-api-access-fqdlg" (OuterVolumeSpecName: "kube-api-access-fqdlg") pod "88903e3e-9598-4699-97e2-19bfc4287037" (UID: "88903e3e-9598-4699-97e2-19bfc4287037"). InnerVolumeSpecName "kube-api-access-fqdlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.330724 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-config" (OuterVolumeSpecName: "config") pod "88903e3e-9598-4699-97e2-19bfc4287037" (UID: "88903e3e-9598-4699-97e2-19bfc4287037"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.334930 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88903e3e-9598-4699-97e2-19bfc4287037" (UID: "88903e3e-9598-4699-97e2-19bfc4287037"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.339796 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88903e3e-9598-4699-97e2-19bfc4287037" (UID: "88903e3e-9598-4699-97e2-19bfc4287037"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.385780 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.385866 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.385880 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqdlg\" (UniqueName: \"kubernetes.io/projected/88903e3e-9598-4699-97e2-19bfc4287037-kube-api-access-fqdlg\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.385891 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88903e3e-9598-4699-97e2-19bfc4287037-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.729327 5034 generic.go:334] "Generic (PLEG): container finished" podID="88903e3e-9598-4699-97e2-19bfc4287037" containerID="bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1" exitCode=0 Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.729380 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-r74gm" event={"ID":"88903e3e-9598-4699-97e2-19bfc4287037","Type":"ContainerDied","Data":"bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1"} Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.729433 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-r74gm" event={"ID":"88903e3e-9598-4699-97e2-19bfc4287037","Type":"ContainerDied","Data":"e50b6f578dfc5a8664e14dde8fb074a62b93841f85eee133b7c8ff0577fe6235"} Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.729459 5034 scope.go:117] "RemoveContainer" containerID="bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.729593 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-r74gm" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.761431 5034 scope.go:117] "RemoveContainer" containerID="2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.778478 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-r74gm"] Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.784202 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-r74gm"] Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.797531 5034 scope.go:117] "RemoveContainer" containerID="bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1" Jan 05 22:11:03 crc kubenswrapper[5034]: E0105 22:11:03.798009 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1\": container with ID starting with bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1 not found: ID does not exist" containerID="bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.798092 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1"} err="failed to get container status \"bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1\": rpc error: code = NotFound desc = could not find container \"bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1\": container with ID starting with bff4c35e652b5a50684776ed74a4572793ea9107a9df853e389b4860673414b1 not found: ID does not exist" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.798137 5034 scope.go:117] "RemoveContainer" containerID="2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98" Jan 05 22:11:03 crc kubenswrapper[5034]: E0105 22:11:03.798888 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98\": container with ID starting with 2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98 not found: ID does not exist" containerID="2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.798930 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98"} err="failed to get container status \"2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98\": rpc error: code = NotFound desc = could not find container \"2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98\": container with ID starting with 2b2572ffb44500a65826763532a402074fb9f6ab511bb269d253c5799a806a98 not found: ID does not exist" Jan 05 22:11:03 crc kubenswrapper[5034]: I0105 22:11:03.848283 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88903e3e-9598-4699-97e2-19bfc4287037" path="/var/lib/kubelet/pods/88903e3e-9598-4699-97e2-19bfc4287037/volumes" Jan 05 22:11:04 crc kubenswrapper[5034]: I0105 22:11:04.161556 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 05 22:11:04 crc kubenswrapper[5034]: I0105 22:11:04.161668 5034 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 05 22:11:04 crc kubenswrapper[5034]: I0105 22:11:04.301942 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 05 22:11:04 crc kubenswrapper[5034]: I0105 22:11:04.827402 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 05 22:11:05 crc kubenswrapper[5034]: I0105 22:11:05.122777 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 05 22:11:05 crc kubenswrapper[5034]: I0105 22:11:05.201576 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 05 22:11:05 crc kubenswrapper[5034]: I0105 22:11:05.993800 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-f5l6r"] Jan 05 22:11:05 crc kubenswrapper[5034]: E0105 22:11:05.994619 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88903e3e-9598-4699-97e2-19bfc4287037" containerName="dnsmasq-dns" Jan 05 22:11:05 crc kubenswrapper[5034]: I0105 22:11:05.994637 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="88903e3e-9598-4699-97e2-19bfc4287037" containerName="dnsmasq-dns" Jan 05 22:11:05 crc kubenswrapper[5034]: E0105 22:11:05.994676 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88903e3e-9598-4699-97e2-19bfc4287037" containerName="init" Jan 05 22:11:05 crc kubenswrapper[5034]: I0105 22:11:05.994686 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="88903e3e-9598-4699-97e2-19bfc4287037" containerName="init" Jan 05 22:11:05 crc kubenswrapper[5034]: I0105 22:11:05.994967 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="88903e3e-9598-4699-97e2-19bfc4287037" containerName="dnsmasq-dns" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.005824 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.009033 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-f5l6r"] Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.041040 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.041145 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.041185 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-config\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.041289 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4lf\" (UniqueName: \"kubernetes.io/projected/47043bfb-044e-4b09-9c61-c97cf3b17a5e-kube-api-access-6d4lf\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.041308 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.054585 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.143513 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.143560 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-config\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.143641 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4lf\" (UniqueName: \"kubernetes.io/projected/47043bfb-044e-4b09-9c61-c97cf3b17a5e-kube-api-access-6d4lf\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " 
pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.143663 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.143774 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.144699 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.144966 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-config\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.145349 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.145751 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.164212 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d4lf\" (UniqueName: \"kubernetes.io/projected/47043bfb-044e-4b09-9c61-c97cf3b17a5e-kube-api-access-6d4lf\") pod \"dnsmasq-dns-6cb545bd4c-f5l6r\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.327944 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:06 crc kubenswrapper[5034]: I0105 22:11:06.767734 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-f5l6r"] Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.130211 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.136067 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.138034 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9hhtr" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.138266 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.138338 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.140274 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.156628 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.178525 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-lock\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.178579 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-cache\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.178617 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.178647 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.178676 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxddp\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-kube-api-access-kxddp\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.280152 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-lock\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.280225 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-cache\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.280272 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.280314 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.280356 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxddp\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-kube-api-access-kxddp\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.281444 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-lock\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.281826 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-cache\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: E0105 22:11:07.282168 5034 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 22:11:07 crc kubenswrapper[5034]: E0105 22:11:07.282200 5034 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 22:11:07 crc kubenswrapper[5034]: E0105 22:11:07.282254 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift podName:4402dece-5e7d-41e8-87e3-54ca201e2c52 nodeName:}" failed. No retries permitted until 2026-01-05 22:11:07.782232716 +0000 UTC m=+1160.154232175 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift") pod "swift-storage-0" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52") : configmap "swift-ring-files" not found Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.282603 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.306187 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxddp\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-kube-api-access-kxddp\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.311742 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.612742 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.780038 5034 generic.go:334] "Generic (PLEG): container finished" podID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" containerID="76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8" exitCode=0 Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.780115 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" event={"ID":"47043bfb-044e-4b09-9c61-c97cf3b17a5e","Type":"ContainerDied","Data":"76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8"} Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.780665 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" event={"ID":"47043bfb-044e-4b09-9c61-c97cf3b17a5e","Type":"ContainerStarted","Data":"51de5bc9b8073083337cfdf2744a65d31ecd9273bb9ed1437483b8182bbecc56"} Jan 05 22:11:07 crc kubenswrapper[5034]: I0105 22:11:07.794863 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:07 crc kubenswrapper[5034]: E0105 22:11:07.795198 5034 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 22:11:07 crc kubenswrapper[5034]: E0105 22:11:07.795220 5034 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 22:11:07 crc kubenswrapper[5034]: E0105 22:11:07.795299 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift podName:4402dece-5e7d-41e8-87e3-54ca201e2c52 nodeName:}" failed. No retries permitted until 2026-01-05 22:11:08.795273188 +0000 UTC m=+1161.167272637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift") pod "swift-storage-0" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52") : configmap "swift-ring-files" not found Jan 05 22:11:08 crc kubenswrapper[5034]: I0105 22:11:08.826372 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:08 crc kubenswrapper[5034]: E0105 22:11:08.826618 5034 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 22:11:08 crc kubenswrapper[5034]: E0105 22:11:08.827344 5034 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 22:11:08 crc kubenswrapper[5034]: E0105 22:11:08.827435 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift podName:4402dece-5e7d-41e8-87e3-54ca201e2c52 nodeName:}" failed. No retries permitted until 2026-01-05 22:11:10.827409604 +0000 UTC m=+1163.199409043 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift") pod "swift-storage-0" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52") : configmap "swift-ring-files" not found Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.908558 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ea2b-account-create-update-6jx2r"] Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.912533 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.918483 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.928945 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ea2b-account-create-update-6jx2r"] Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.946197 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8nxg9"] Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.947721 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.953688 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6985bd-1df9-4935-9303-399e57584e90-operator-scripts\") pod \"glance-ea2b-account-create-update-6jx2r\" (UID: \"2b6985bd-1df9-4935-9303-399e57584e90\") " pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.953746 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4dd\" (UniqueName: \"kubernetes.io/projected/2b6985bd-1df9-4935-9303-399e57584e90-kube-api-access-xs4dd\") pod \"glance-ea2b-account-create-update-6jx2r\" (UID: \"2b6985bd-1df9-4935-9303-399e57584e90\") " pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:09 crc kubenswrapper[5034]: I0105 22:11:09.956014 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8nxg9"] Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.055888 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6985bd-1df9-4935-9303-399e57584e90-operator-scripts\") pod \"glance-ea2b-account-create-update-6jx2r\" (UID: \"2b6985bd-1df9-4935-9303-399e57584e90\") " pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.055989 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4dd\" (UniqueName: \"kubernetes.io/projected/2b6985bd-1df9-4935-9303-399e57584e90-kube-api-access-xs4dd\") pod \"glance-ea2b-account-create-update-6jx2r\" (UID: \"2b6985bd-1df9-4935-9303-399e57584e90\") " pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.056040 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9f5\" (UniqueName: \"kubernetes.io/projected/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-kube-api-access-mb9f5\") pod \"glance-db-create-8nxg9\" (UID: \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\") " pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.056172 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-operator-scripts\") pod \"glance-db-create-8nxg9\" (UID: \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\") " pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.057319 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6985bd-1df9-4935-9303-399e57584e90-operator-scripts\") pod \"glance-ea2b-account-create-update-6jx2r\" (UID: \"2b6985bd-1df9-4935-9303-399e57584e90\") " pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.079618 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4dd\" (UniqueName: \"kubernetes.io/projected/2b6985bd-1df9-4935-9303-399e57584e90-kube-api-access-xs4dd\") pod \"glance-ea2b-account-create-update-6jx2r\" (UID: \"2b6985bd-1df9-4935-9303-399e57584e90\") " 
pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.158559 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb9f5\" (UniqueName: \"kubernetes.io/projected/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-kube-api-access-mb9f5\") pod \"glance-db-create-8nxg9\" (UID: \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\") " pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.158665 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-operator-scripts\") pod \"glance-db-create-8nxg9\" (UID: \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\") " pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.159552 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-operator-scripts\") pod \"glance-db-create-8nxg9\" (UID: \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\") " pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.186935 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb9f5\" (UniqueName: \"kubernetes.io/projected/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-kube-api-access-mb9f5\") pod \"glance-db-create-8nxg9\" (UID: \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\") " pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.245036 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.267516 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.709335 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ea2b-account-create-update-6jx2r"] Jan 05 22:11:10 crc kubenswrapper[5034]: W0105 22:11:10.718307 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b6985bd_1df9_4935_9303_399e57584e90.slice/crio-9151ff7a3b7640663e4db7c4dab4923b8522abb0f109533e9edd295f1325b810 WatchSource:0}: Error finding container 9151ff7a3b7640663e4db7c4dab4923b8522abb0f109533e9edd295f1325b810: Status 404 returned error can't find the container with id 9151ff7a3b7640663e4db7c4dab4923b8522abb0f109533e9edd295f1325b810 Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.785975 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8nxg9"] Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.811314 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" event={"ID":"47043bfb-044e-4b09-9c61-c97cf3b17a5e","Type":"ContainerStarted","Data":"5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94"} Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.811461 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.812766 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8nxg9" event={"ID":"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b","Type":"ContainerStarted","Data":"b38472431054b41694e1d5ab0aec39a2d60a61a2695000fbd9649b7e539c91bb"} Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.814293 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ea2b-account-create-update-6jx2r" event={"ID":"2b6985bd-1df9-4935-9303-399e57584e90","Type":"ContainerStarted","Data":"9151ff7a3b7640663e4db7c4dab4923b8522abb0f109533e9edd295f1325b810"} Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.829536 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" podStartSLOduration=5.829511509 podStartE2EDuration="5.829511509s" podCreationTimestamp="2026-01-05 22:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:11:10.826252457 +0000 UTC m=+1163.198251916" watchObservedRunningTime="2026-01-05 22:11:10.829511509 +0000 UTC m=+1163.201510948" Jan 05 22:11:10 crc kubenswrapper[5034]: I0105 22:11:10.870731 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:10 crc kubenswrapper[5034]: E0105 22:11:10.870987 5034 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 22:11:10 crc kubenswrapper[5034]: E0105 22:11:10.871011 5034 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 22:11:10 crc kubenswrapper[5034]: E0105 22:11:10.871160 5034 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift podName:4402dece-5e7d-41e8-87e3-54ca201e2c52 nodeName:}" failed. No retries permitted until 2026-01-05 22:11:14.871056961 +0000 UTC m=+1167.243056400 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift") pod "swift-storage-0" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52") : configmap "swift-ring-files" not found Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.059110 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8wjbq"] Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.060438 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.066616 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.066700 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.067161 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.071341 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8wjbq"] Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.177945 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p56r5\" (UniqueName: \"kubernetes.io/projected/5158186d-181d-498c-8eeb-c222566958f7-kube-api-access-p56r5\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.178559 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-swiftconf\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.178620 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5158186d-181d-498c-8eeb-c222566958f7-etc-swift\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.178831 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-dispersionconf\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.178921 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-combined-ca-bundle\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc 
kubenswrapper[5034]: I0105 22:11:11.178997 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-ring-data-devices\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.179035 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-scripts\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.280819 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-dispersionconf\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.280919 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-combined-ca-bundle\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.280960 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-ring-data-devices\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.281053 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-scripts\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.281913 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-scripts\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.282307 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-ring-data-devices\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.282423 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p56r5\" (UniqueName: \"kubernetes.io/projected/5158186d-181d-498c-8eeb-c222566958f7-kube-api-access-p56r5\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.282506 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-swiftconf\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.282564 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5158186d-181d-498c-8eeb-c222566958f7-etc-swift\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.282812 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5158186d-181d-498c-8eeb-c222566958f7-etc-swift\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.287282 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-combined-ca-bundle\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.287302 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-dispersionconf\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.296712 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-swiftconf\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.313639 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p56r5\" (UniqueName: \"kubernetes.io/projected/5158186d-181d-498c-8eeb-c222566958f7-kube-api-access-p56r5\") pod \"swift-ring-rebalance-8wjbq\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.424663 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zssqh"] Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.425701 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.431916 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.436195 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zssqh"] Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.488166 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-operator-scripts\") pod \"root-account-create-update-zssqh\" (UID: \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\") " pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.488273 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv8kr\" (UniqueName: \"kubernetes.io/projected/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-kube-api-access-lv8kr\") pod \"root-account-create-update-zssqh\" (UID: \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\") " pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.551586 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.589475 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-operator-scripts\") pod \"root-account-create-update-zssqh\" (UID: \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\") " pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.589562 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv8kr\" (UniqueName: \"kubernetes.io/projected/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-kube-api-access-lv8kr\") pod \"root-account-create-update-zssqh\" (UID: \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\") " pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.590420 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-operator-scripts\") pod \"root-account-create-update-zssqh\" (UID: \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\") " pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.605611 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv8kr\" (UniqueName: \"kubernetes.io/projected/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-kube-api-access-lv8kr\") pod \"root-account-create-update-zssqh\" (UID: \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\") " pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.752516 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.824051 5034 generic.go:334] "Generic (PLEG): container finished" podID="a5121c9a-d20b-4f58-b7f1-58852b2f4e1b" containerID="a99fecf8f29d0a8e970efa45fd27a60018d29e40f8d02a4683a436301044a188" exitCode=0 Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.824117 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8nxg9" event={"ID":"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b","Type":"ContainerDied","Data":"a99fecf8f29d0a8e970efa45fd27a60018d29e40f8d02a4683a436301044a188"} Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.827956 5034 generic.go:334] "Generic (PLEG): container finished" podID="2b6985bd-1df9-4935-9303-399e57584e90" containerID="b249c795882794edb5ec5acb2049718d190e4b644203126c284a968743e89077" exitCode=0 Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.828058 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ea2b-account-create-update-6jx2r" event={"ID":"2b6985bd-1df9-4935-9303-399e57584e90","Type":"ContainerDied","Data":"b249c795882794edb5ec5acb2049718d190e4b644203126c284a968743e89077"} Jan 05 22:11:11 crc kubenswrapper[5034]: I0105 22:11:11.996670 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8wjbq"] Jan 05 22:11:12 crc kubenswrapper[5034]: I0105 22:11:12.197366 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zssqh"] Jan 05 22:11:12 crc kubenswrapper[5034]: I0105 22:11:12.864994 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8wjbq" event={"ID":"5158186d-181d-498c-8eeb-c222566958f7","Type":"ContainerStarted","Data":"0e323d6437034b1eb8526a8bf58b15973ad28a88e37e7ceecca443b72e468f61"} Jan 05 22:11:12 crc kubenswrapper[5034]: I0105 22:11:12.866928 5034 generic.go:334] "Generic (PLEG): container finished" podID="da683ba3-7b16-4adc-9eb1-4a986a53e8ac" containerID="4a876f6ca118f3044d36bf8081d2cee6ca90bf93e157fd01110cb38a9db2b531" exitCode=0 Jan 05 22:11:12 crc kubenswrapper[5034]: I0105 22:11:12.867285 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zssqh" event={"ID":"da683ba3-7b16-4adc-9eb1-4a986a53e8ac","Type":"ContainerDied","Data":"4a876f6ca118f3044d36bf8081d2cee6ca90bf93e157fd01110cb38a9db2b531"} Jan 05 22:11:12 crc kubenswrapper[5034]: I0105 22:11:12.867351 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zssqh" event={"ID":"da683ba3-7b16-4adc-9eb1-4a986a53e8ac","Type":"ContainerStarted","Data":"e1483bbcdff81eb73c66377af991e0f02dbab4861a12a1085b01bacee083b2f1"} Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.397768 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.404358 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.531677 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6985bd-1df9-4935-9303-399e57584e90-operator-scripts\") pod \"2b6985bd-1df9-4935-9303-399e57584e90\" (UID: \"2b6985bd-1df9-4935-9303-399e57584e90\") " Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.531717 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs4dd\" (UniqueName: \"kubernetes.io/projected/2b6985bd-1df9-4935-9303-399e57584e90-kube-api-access-xs4dd\") pod \"2b6985bd-1df9-4935-9303-399e57584e90\" (UID: \"2b6985bd-1df9-4935-9303-399e57584e90\") " Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.532853 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b6985bd-1df9-4935-9303-399e57584e90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b6985bd-1df9-4935-9303-399e57584e90" (UID: "2b6985bd-1df9-4935-9303-399e57584e90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.533339 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-operator-scripts\") pod \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\" (UID: \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\") " Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.533479 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb9f5\" (UniqueName: \"kubernetes.io/projected/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-kube-api-access-mb9f5\") pod \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\" (UID: \"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b\") " Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.533963 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5121c9a-d20b-4f58-b7f1-58852b2f4e1b" (UID: "a5121c9a-d20b-4f58-b7f1-58852b2f4e1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.534342 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6985bd-1df9-4935-9303-399e57584e90-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.534363 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.539492 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6985bd-1df9-4935-9303-399e57584e90-kube-api-access-xs4dd" (OuterVolumeSpecName: "kube-api-access-xs4dd") pod "2b6985bd-1df9-4935-9303-399e57584e90" (UID: "2b6985bd-1df9-4935-9303-399e57584e90"). InnerVolumeSpecName "kube-api-access-xs4dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.542311 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-kube-api-access-mb9f5" (OuterVolumeSpecName: "kube-api-access-mb9f5") pod "a5121c9a-d20b-4f58-b7f1-58852b2f4e1b" (UID: "a5121c9a-d20b-4f58-b7f1-58852b2f4e1b"). InnerVolumeSpecName "kube-api-access-mb9f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.636926 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb9f5\" (UniqueName: \"kubernetes.io/projected/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b-kube-api-access-mb9f5\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.636961 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs4dd\" (UniqueName: \"kubernetes.io/projected/2b6985bd-1df9-4935-9303-399e57584e90-kube-api-access-xs4dd\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.878424 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8nxg9" event={"ID":"a5121c9a-d20b-4f58-b7f1-58852b2f4e1b","Type":"ContainerDied","Data":"b38472431054b41694e1d5ab0aec39a2d60a61a2695000fbd9649b7e539c91bb"} Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.878483 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8nxg9" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.878492 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38472431054b41694e1d5ab0aec39a2d60a61a2695000fbd9649b7e539c91bb" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.880167 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ea2b-account-create-update-6jx2r" Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.881991 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ea2b-account-create-update-6jx2r" event={"ID":"2b6985bd-1df9-4935-9303-399e57584e90","Type":"ContainerDied","Data":"9151ff7a3b7640663e4db7c4dab4923b8522abb0f109533e9edd295f1325b810"} Jan 05 22:11:13 crc kubenswrapper[5034]: I0105 22:11:13.882016 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9151ff7a3b7640663e4db7c4dab4923b8522abb0f109533e9edd295f1325b810" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.118272 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2xs4w"] Jan 05 22:11:14 crc kubenswrapper[5034]: E0105 22:11:14.119306 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5121c9a-d20b-4f58-b7f1-58852b2f4e1b" containerName="mariadb-database-create" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.119322 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5121c9a-d20b-4f58-b7f1-58852b2f4e1b" containerName="mariadb-database-create" Jan 05 22:11:14 crc kubenswrapper[5034]: E0105 22:11:14.119380 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6985bd-1df9-4935-9303-399e57584e90" containerName="mariadb-account-create-update" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.119388 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6985bd-1df9-4935-9303-399e57584e90" containerName="mariadb-account-create-update" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.119665 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6985bd-1df9-4935-9303-399e57584e90" containerName="mariadb-account-create-update" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.119684 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5121c9a-d20b-4f58-b7f1-58852b2f4e1b" containerName="mariadb-database-create" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.123743 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.139835 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2xs4w"] Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.218018 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4d3f-account-create-update-q7fvm"] Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.219608 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.222687 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.227394 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4d3f-account-create-update-q7fvm"] Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.247671 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lchr\" (UniqueName: \"kubernetes.io/projected/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-kube-api-access-6lchr\") pod \"keystone-db-create-2xs4w\" (UID: \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\") " pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.247749 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-operator-scripts\") pod \"keystone-db-create-2xs4w\" (UID: \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\") " pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.349592 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb498b0-229a-430a-8fb9-4311f3c7cd88-operator-scripts\") pod \"keystone-4d3f-account-create-update-q7fvm\" (UID: \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\") " pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.349716 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-operator-scripts\") pod \"keystone-db-create-2xs4w\" (UID: \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\") " pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.350566 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-operator-scripts\") pod \"keystone-db-create-2xs4w\" (UID: \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\") " pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.350696 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jrxf\" (UniqueName: \"kubernetes.io/projected/8bb498b0-229a-430a-8fb9-4311f3c7cd88-kube-api-access-2jrxf\") pod \"keystone-4d3f-account-create-update-q7fvm\" (UID: \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\") " pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.350788 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lchr\" (UniqueName: \"kubernetes.io/projected/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-kube-api-access-6lchr\") pod \"keystone-db-create-2xs4w\" (UID: \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\") " pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.396424 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lchr\" (UniqueName: \"kubernetes.io/projected/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-kube-api-access-6lchr\") pod 
\"keystone-db-create-2xs4w\" (UID: \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\") " pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.411138 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-c82dz"] Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.412526 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c82dz" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.426512 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c82dz"] Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.447361 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.459262 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb498b0-229a-430a-8fb9-4311f3c7cd88-operator-scripts\") pod \"keystone-4d3f-account-create-update-q7fvm\" (UID: \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\") " pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.459561 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhbhz\" (UniqueName: \"kubernetes.io/projected/a1b470b5-b9ec-4d92-8965-9c0be5366721-kube-api-access-nhbhz\") pod \"placement-db-create-c82dz\" (UID: \"a1b470b5-b9ec-4d92-8965-9c0be5366721\") " pod="openstack/placement-db-create-c82dz" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.460200 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jrxf\" (UniqueName: \"kubernetes.io/projected/8bb498b0-229a-430a-8fb9-4311f3c7cd88-kube-api-access-2jrxf\") pod \"keystone-4d3f-account-create-update-q7fvm\" (UID: \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\") " pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.460449 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1b470b5-b9ec-4d92-8965-9c0be5366721-operator-scripts\") pod \"placement-db-create-c82dz\" (UID: \"a1b470b5-b9ec-4d92-8965-9c0be5366721\") " pod="openstack/placement-db-create-c82dz" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.463231 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb498b0-229a-430a-8fb9-4311f3c7cd88-operator-scripts\") pod \"keystone-4d3f-account-create-update-q7fvm\" (UID: \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\") " pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.513423 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jrxf\" (UniqueName: \"kubernetes.io/projected/8bb498b0-229a-430a-8fb9-4311f3c7cd88-kube-api-access-2jrxf\") pod \"keystone-4d3f-account-create-update-q7fvm\" (UID: \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\") " pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.539869 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8420-account-create-update-kv5xq"] Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.540556 5034 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.541480 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.544549 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.550039 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8420-account-create-update-kv5xq"] Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.564508 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1b470b5-b9ec-4d92-8965-9c0be5366721-operator-scripts\") pod \"placement-db-create-c82dz\" (UID: \"a1b470b5-b9ec-4d92-8965-9c0be5366721\") " pod="openstack/placement-db-create-c82dz" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.564665 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhbhz\" (UniqueName: \"kubernetes.io/projected/a1b470b5-b9ec-4d92-8965-9c0be5366721-kube-api-access-nhbhz\") pod \"placement-db-create-c82dz\" (UID: \"a1b470b5-b9ec-4d92-8965-9c0be5366721\") " pod="openstack/placement-db-create-c82dz" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.565361 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1b470b5-b9ec-4d92-8965-9c0be5366721-operator-scripts\") pod \"placement-db-create-c82dz\" (UID: \"a1b470b5-b9ec-4d92-8965-9c0be5366721\") " pod="openstack/placement-db-create-c82dz" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.583917 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhbhz\" (UniqueName: \"kubernetes.io/projected/a1b470b5-b9ec-4d92-8965-9c0be5366721-kube-api-access-nhbhz\") pod \"placement-db-create-c82dz\" (UID: \"a1b470b5-b9ec-4d92-8965-9c0be5366721\") " pod="openstack/placement-db-create-c82dz" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.668789 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31cf2934-c66e-40ca-81f5-26c0efff8bd4-operator-scripts\") pod \"placement-8420-account-create-update-kv5xq\" (UID: \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\") " pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.670092 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ndtk\" (UniqueName: \"kubernetes.io/projected/31cf2934-c66e-40ca-81f5-26c0efff8bd4-kube-api-access-5ndtk\") pod \"placement-8420-account-create-update-kv5xq\" (UID: \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\") " pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.772473 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ndtk\" (UniqueName: \"kubernetes.io/projected/31cf2934-c66e-40ca-81f5-26c0efff8bd4-kube-api-access-5ndtk\") pod \"placement-8420-account-create-update-kv5xq\" (UID: \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\") " pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 
22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.772551 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31cf2934-c66e-40ca-81f5-26c0efff8bd4-operator-scripts\") pod \"placement-8420-account-create-update-kv5xq\" (UID: \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\") " pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.773392 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31cf2934-c66e-40ca-81f5-26c0efff8bd4-operator-scripts\") pod \"placement-8420-account-create-update-kv5xq\" (UID: \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\") " pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.776158 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c82dz" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.791105 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ndtk\" (UniqueName: \"kubernetes.io/projected/31cf2934-c66e-40ca-81f5-26c0efff8bd4-kube-api-access-5ndtk\") pod \"placement-8420-account-create-update-kv5xq\" (UID: \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\") " pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.873715 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:14 crc kubenswrapper[5034]: I0105 22:11:14.874517 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:14 crc kubenswrapper[5034]: E0105 22:11:14.874692 5034 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 22:11:14 crc kubenswrapper[5034]: E0105 22:11:14.874709 5034 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 22:11:14 crc kubenswrapper[5034]: E0105 22:11:14.874760 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift podName:4402dece-5e7d-41e8-87e3-54ca201e2c52 nodeName:}" failed. No retries permitted until 2026-01-05 22:11:22.874741576 +0000 UTC m=+1175.246741015 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift") pod "swift-storage-0" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52") : configmap "swift-ring-files" not found Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.134931 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5l8t7"] Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.136344 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.140173 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.145445 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65j9f" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.151512 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5l8t7"] Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.286098 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-db-sync-config-data\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.286232 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pb25\" (UniqueName: \"kubernetes.io/projected/492f33ff-82d7-4355-a412-faf4e879a228-kube-api-access-4pb25\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.286271 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-config-data\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.286300 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-combined-ca-bundle\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.388406 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-db-sync-config-data\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.388521 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pb25\" (UniqueName: \"kubernetes.io/projected/492f33ff-82d7-4355-a412-faf4e879a228-kube-api-access-4pb25\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.388566 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-config-data\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.388590 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-combined-ca-bundle\") pod 
\"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.394206 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-config-data\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.397638 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-combined-ca-bundle\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.404869 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-db-sync-config-data\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.415495 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pb25\" (UniqueName: \"kubernetes.io/projected/492f33ff-82d7-4355-a412-faf4e879a228-kube-api-access-4pb25\") pod \"glance-db-sync-5l8t7\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.462050 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:15 crc kubenswrapper[5034]: I0105 22:11:15.999057 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.101514 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-operator-scripts\") pod \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\" (UID: \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\") " Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.101692 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv8kr\" (UniqueName: \"kubernetes.io/projected/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-kube-api-access-lv8kr\") pod \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\" (UID: \"da683ba3-7b16-4adc-9eb1-4a986a53e8ac\") " Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.102648 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da683ba3-7b16-4adc-9eb1-4a986a53e8ac" (UID: "da683ba3-7b16-4adc-9eb1-4a986a53e8ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.113101 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-kube-api-access-lv8kr" (OuterVolumeSpecName: "kube-api-access-lv8kr") pod "da683ba3-7b16-4adc-9eb1-4a986a53e8ac" (UID: "da683ba3-7b16-4adc-9eb1-4a986a53e8ac"). InnerVolumeSpecName "kube-api-access-lv8kr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.204402 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.204468 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv8kr\" (UniqueName: \"kubernetes.io/projected/da683ba3-7b16-4adc-9eb1-4a986a53e8ac-kube-api-access-lv8kr\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.330348 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.407630 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-zfqq7"] Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.407865 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" podUID="79846781-e528-4b43-aacd-cbc32085ca10" containerName="dnsmasq-dns" containerID="cri-o://d6a80d4ad135d8f2aa8274f9023410d8716b8f0402953ca37061ac0cba988252" gracePeriod=10 Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.910437 5034 generic.go:334] "Generic (PLEG): container finished" podID="79846781-e528-4b43-aacd-cbc32085ca10" containerID="d6a80d4ad135d8f2aa8274f9023410d8716b8f0402953ca37061ac0cba988252" exitCode=0 Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.910523 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" event={"ID":"79846781-e528-4b43-aacd-cbc32085ca10","Type":"ContainerDied","Data":"d6a80d4ad135d8f2aa8274f9023410d8716b8f0402953ca37061ac0cba988252"} Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.912003 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zssqh" event={"ID":"da683ba3-7b16-4adc-9eb1-4a986a53e8ac","Type":"ContainerDied","Data":"e1483bbcdff81eb73c66377af991e0f02dbab4861a12a1085b01bacee083b2f1"} Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.912038 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1483bbcdff81eb73c66377af991e0f02dbab4861a12a1085b01bacee083b2f1" Jan 05 22:11:16 crc kubenswrapper[5034]: I0105 22:11:16.912116 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zssqh" Jan 05 22:11:17 crc kubenswrapper[5034]: I0105 22:11:17.877628 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zssqh"] Jan 05 22:11:17 crc kubenswrapper[5034]: I0105 22:11:17.891796 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zssqh"] Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.060518 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.189224 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-dns-svc\") pod \"79846781-e528-4b43-aacd-cbc32085ca10\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.189448 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntbgg\" (UniqueName: \"kubernetes.io/projected/79846781-e528-4b43-aacd-cbc32085ca10-kube-api-access-ntbgg\") pod \"79846781-e528-4b43-aacd-cbc32085ca10\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.189507 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-config\") pod \"79846781-e528-4b43-aacd-cbc32085ca10\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.189535 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-nb\") pod \"79846781-e528-4b43-aacd-cbc32085ca10\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.190307 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-sb\") pod \"79846781-e528-4b43-aacd-cbc32085ca10\" (UID: \"79846781-e528-4b43-aacd-cbc32085ca10\") " Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.195486 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79846781-e528-4b43-aacd-cbc32085ca10-kube-api-access-ntbgg" (OuterVolumeSpecName: "kube-api-access-ntbgg") pod "79846781-e528-4b43-aacd-cbc32085ca10" (UID: "79846781-e528-4b43-aacd-cbc32085ca10"). InnerVolumeSpecName "kube-api-access-ntbgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.231822 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79846781-e528-4b43-aacd-cbc32085ca10" (UID: "79846781-e528-4b43-aacd-cbc32085ca10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.232954 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-config" (OuterVolumeSpecName: "config") pod "79846781-e528-4b43-aacd-cbc32085ca10" (UID: "79846781-e528-4b43-aacd-cbc32085ca10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.233507 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79846781-e528-4b43-aacd-cbc32085ca10" (UID: "79846781-e528-4b43-aacd-cbc32085ca10"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.240715 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79846781-e528-4b43-aacd-cbc32085ca10" (UID: "79846781-e528-4b43-aacd-cbc32085ca10"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.292492 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.292529 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.292541 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntbgg\" (UniqueName: \"kubernetes.io/projected/79846781-e528-4b43-aacd-cbc32085ca10-kube-api-access-ntbgg\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.292554 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.292563 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79846781-e528-4b43-aacd-cbc32085ca10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.377926 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8420-account-create-update-kv5xq"] Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.389276 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c82dz"] Jan 05 22:11:18 crc kubenswrapper[5034]: W0105 22:11:18.392481 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31cf2934_c66e_40ca_81f5_26c0efff8bd4.slice/crio-9b441095a0fa9538e92cb5e9db91f5f13ab2ec20eab7cd50ef9d09d2d43b2b95 WatchSource:0}: Error finding container 9b441095a0fa9538e92cb5e9db91f5f13ab2ec20eab7cd50ef9d09d2d43b2b95: Status 404 returned error can't find the container with id 9b441095a0fa9538e92cb5e9db91f5f13ab2ec20eab7cd50ef9d09d2d43b2b95 Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.433966 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2xs4w"] Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.456991 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4d3f-account-create-update-q7fvm"] Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.493939 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5l8t7"] Jan 05 22:11:18 crc kubenswrapper[5034]: W0105 22:11:18.508520 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod492f33ff_82d7_4355_a412_faf4e879a228.slice/crio-fc54392cb0e83a068b04c558619de55802956a680a0b25014ca31ee6e01daef8 WatchSource:0}: Error finding container 
fc54392cb0e83a068b04c558619de55802956a680a0b25014ca31ee6e01daef8: Status 404 returned error can't find the container with id fc54392cb0e83a068b04c558619de55802956a680a0b25014ca31ee6e01daef8 Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.938656 5034 generic.go:334] "Generic (PLEG): container finished" podID="2bc6c217-9ff1-47b6-a60d-9029e501d9e0" containerID="77ee3211f16dd4c1bd18b9108069d4dd9647d3634a5906976fc4a938a4bf9f37" exitCode=0 Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.939116 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2xs4w" event={"ID":"2bc6c217-9ff1-47b6-a60d-9029e501d9e0","Type":"ContainerDied","Data":"77ee3211f16dd4c1bd18b9108069d4dd9647d3634a5906976fc4a938a4bf9f37"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.939156 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2xs4w" event={"ID":"2bc6c217-9ff1-47b6-a60d-9029e501d9e0","Type":"ContainerStarted","Data":"5012ba47ec087e36edf29b85b6d9ab7a1ad6771df617542fce8800acca13fdda"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.947345 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" event={"ID":"79846781-e528-4b43-aacd-cbc32085ca10","Type":"ContainerDied","Data":"5c848a23b2367bf7708582f74025f39c1abc4b8728b4e3dac5e3a5b45791da1e"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.947392 5034 scope.go:117] "RemoveContainer" containerID="d6a80d4ad135d8f2aa8274f9023410d8716b8f0402953ca37061ac0cba988252" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.947526 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.953355 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8wjbq" event={"ID":"5158186d-181d-498c-8eeb-c222566958f7","Type":"ContainerStarted","Data":"bfb7b48fb39b141b2930622dfff5f754765edc64fa5517e3d7f0bc67c49e0300"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.959042 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8420-account-create-update-kv5xq" event={"ID":"31cf2934-c66e-40ca-81f5-26c0efff8bd4","Type":"ContainerStarted","Data":"9dfbea345804939d23848ac8f2b0cb84a5be4b7898cb0b1bd0d64e7e30972b18"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.959113 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8420-account-create-update-kv5xq" event={"ID":"31cf2934-c66e-40ca-81f5-26c0efff8bd4","Type":"ContainerStarted","Data":"9b441095a0fa9538e92cb5e9db91f5f13ab2ec20eab7cd50ef9d09d2d43b2b95"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.960493 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5l8t7" event={"ID":"492f33ff-82d7-4355-a412-faf4e879a228","Type":"ContainerStarted","Data":"fc54392cb0e83a068b04c558619de55802956a680a0b25014ca31ee6e01daef8"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.963731 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4d3f-account-create-update-q7fvm" event={"ID":"8bb498b0-229a-430a-8fb9-4311f3c7cd88","Type":"ContainerStarted","Data":"2a5ae0b4db2a15a1ed057e63af73bcd1c1a7cffc2bb0ddc0f3dbc39f84046c12"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.963769 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4d3f-account-create-update-q7fvm" 
event={"ID":"8bb498b0-229a-430a-8fb9-4311f3c7cd88","Type":"ContainerStarted","Data":"4698c30684fdb44bc3bf80027e394cd60d80dc8e2302dd4af20dc6a805c6edac"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.964192 5034 generic.go:334] "Generic (PLEG): container finished" podID="a1b470b5-b9ec-4d92-8965-9c0be5366721" containerID="fe32f24daade07e4bffe647a0b7d4e77ff03fd015a4876330d971f0844a9fc2b" exitCode=0 Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.964249 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c82dz" event={"ID":"a1b470b5-b9ec-4d92-8965-9c0be5366721","Type":"ContainerDied","Data":"fe32f24daade07e4bffe647a0b7d4e77ff03fd015a4876330d971f0844a9fc2b"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.964278 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c82dz" event={"ID":"a1b470b5-b9ec-4d92-8965-9c0be5366721","Type":"ContainerStarted","Data":"0ee2acff2ab39df88c71e8f8514708f3b0b7cd2586df37d8f478f4927564d139"} Jan 05 22:11:18 crc kubenswrapper[5034]: I0105 22:11:18.992141 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8wjbq" podStartSLOduration=2.479235011 podStartE2EDuration="7.992113427s" podCreationTimestamp="2026-01-05 22:11:11 +0000 UTC" firstStartedPulling="2026-01-05 22:11:11.997786738 +0000 UTC m=+1164.369786177" lastFinishedPulling="2026-01-05 22:11:17.510665154 +0000 UTC m=+1169.882664593" observedRunningTime="2026-01-05 22:11:18.985537911 +0000 UTC m=+1171.357537350" watchObservedRunningTime="2026-01-05 22:11:18.992113427 +0000 UTC m=+1171.364112866" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.130152 5034 scope.go:117] "RemoveContainer" containerID="3dbd50eef769e0db2dc32386f7c8ebfc702facef92f06a9ed6a138fc721a3e21" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.135054 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-zfqq7"] Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.143335 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-zfqq7"] Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.679951 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4gbcl" podUID="8174d3dc-0931-484a-850f-3649234ef9fc" containerName="ovn-controller" probeResult="failure" output=< Jan 05 22:11:19 crc kubenswrapper[5034]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 05 22:11:19 crc kubenswrapper[5034]: > Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.697800 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.713351 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.848468 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79846781-e528-4b43-aacd-cbc32085ca10" path="/var/lib/kubelet/pods/79846781-e528-4b43-aacd-cbc32085ca10/volumes" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.849383 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da683ba3-7b16-4adc-9eb1-4a986a53e8ac" path="/var/lib/kubelet/pods/da683ba3-7b16-4adc-9eb1-4a986a53e8ac/volumes" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.952046 5034 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ovn-controller-4gbcl-config-5vvld"] Jan 05 22:11:19 crc kubenswrapper[5034]: E0105 22:11:19.952606 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79846781-e528-4b43-aacd-cbc32085ca10" containerName="init" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.952625 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="79846781-e528-4b43-aacd-cbc32085ca10" containerName="init" Jan 05 22:11:19 crc kubenswrapper[5034]: E0105 22:11:19.952654 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79846781-e528-4b43-aacd-cbc32085ca10" containerName="dnsmasq-dns" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.952661 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="79846781-e528-4b43-aacd-cbc32085ca10" containerName="dnsmasq-dns" Jan 05 22:11:19 crc kubenswrapper[5034]: E0105 22:11:19.952681 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da683ba3-7b16-4adc-9eb1-4a986a53e8ac" containerName="mariadb-account-create-update" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.952688 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="da683ba3-7b16-4adc-9eb1-4a986a53e8ac" containerName="mariadb-account-create-update" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.952858 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="da683ba3-7b16-4adc-9eb1-4a986a53e8ac" containerName="mariadb-account-create-update" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.952872 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="79846781-e528-4b43-aacd-cbc32085ca10" containerName="dnsmasq-dns" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.953571 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.956946 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.969914 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gbcl-config-5vvld"] Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.993146 5034 generic.go:334] "Generic (PLEG): container finished" podID="8bb498b0-229a-430a-8fb9-4311f3c7cd88" containerID="2a5ae0b4db2a15a1ed057e63af73bcd1c1a7cffc2bb0ddc0f3dbc39f84046c12" exitCode=0 Jan 05 22:11:19 crc kubenswrapper[5034]: I0105 22:11:19.993260 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4d3f-account-create-update-q7fvm" event={"ID":"8bb498b0-229a-430a-8fb9-4311f3c7cd88","Type":"ContainerDied","Data":"2a5ae0b4db2a15a1ed057e63af73bcd1c1a7cffc2bb0ddc0f3dbc39f84046c12"} Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.004392 5034 generic.go:334] "Generic (PLEG): container finished" podID="31cf2934-c66e-40ca-81f5-26c0efff8bd4" containerID="9dfbea345804939d23848ac8f2b0cb84a5be4b7898cb0b1bd0d64e7e30972b18" exitCode=0 Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.004647 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8420-account-create-update-kv5xq" event={"ID":"31cf2934-c66e-40ca-81f5-26c0efff8bd4","Type":"ContainerDied","Data":"9dfbea345804939d23848ac8f2b0cb84a5be4b7898cb0b1bd0d64e7e30972b18"} Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.042194 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.042280 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-log-ovn\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.042306 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-additional-scripts\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.042336 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run-ovn\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.042373 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-scripts\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: 
\"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.042496 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhjwd\" (UniqueName: \"kubernetes.io/projected/96889f24-6ca9-4630-955a-d0167deea19c-kube-api-access-zhjwd\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.144356 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.144467 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-log-ovn\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.144499 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-additional-scripts\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.144528 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run-ovn\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.144565 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-scripts\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.144599 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhjwd\" (UniqueName: \"kubernetes.io/projected/96889f24-6ca9-4630-955a-d0167deea19c-kube-api-access-zhjwd\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.146042 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.147042 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-log-ovn\") pod 
\"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.147210 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run-ovn\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.149085 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-additional-scripts\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.150189 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-scripts\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.186233 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhjwd\" (UniqueName: \"kubernetes.io/projected/96889f24-6ca9-4630-955a-d0167deea19c-kube-api-access-zhjwd\") pod \"ovn-controller-4gbcl-config-5vvld\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.293110 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.499216 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.636596 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c82dz" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.647619 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.652113 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb498b0-229a-430a-8fb9-4311f3c7cd88-operator-scripts\") pod \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\" (UID: \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\") " Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.652324 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jrxf\" (UniqueName: \"kubernetes.io/projected/8bb498b0-229a-430a-8fb9-4311f3c7cd88-kube-api-access-2jrxf\") pod \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\" (UID: \"8bb498b0-229a-430a-8fb9-4311f3c7cd88\") " Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.654465 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb498b0-229a-430a-8fb9-4311f3c7cd88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bb498b0-229a-430a-8fb9-4311f3c7cd88" (UID: "8bb498b0-229a-430a-8fb9-4311f3c7cd88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.663415 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb498b0-229a-430a-8fb9-4311f3c7cd88-kube-api-access-2jrxf" (OuterVolumeSpecName: "kube-api-access-2jrxf") pod "8bb498b0-229a-430a-8fb9-4311f3c7cd88" (UID: "8bb498b0-229a-430a-8fb9-4311f3c7cd88"). InnerVolumeSpecName "kube-api-access-2jrxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.674448 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.754891 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ndtk\" (UniqueName: \"kubernetes.io/projected/31cf2934-c66e-40ca-81f5-26c0efff8bd4-kube-api-access-5ndtk\") pod \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\" (UID: \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\") " Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.754955 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lchr\" (UniqueName: \"kubernetes.io/projected/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-kube-api-access-6lchr\") pod \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\" (UID: \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\") " Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.754987 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31cf2934-c66e-40ca-81f5-26c0efff8bd4-operator-scripts\") pod \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\" (UID: \"31cf2934-c66e-40ca-81f5-26c0efff8bd4\") " Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755103 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1b470b5-b9ec-4d92-8965-9c0be5366721-operator-scripts\") pod \"a1b470b5-b9ec-4d92-8965-9c0be5366721\" (UID: \"a1b470b5-b9ec-4d92-8965-9c0be5366721\") " Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755168 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhbhz\" (UniqueName: \"kubernetes.io/projected/a1b470b5-b9ec-4d92-8965-9c0be5366721-kube-api-access-nhbhz\") pod \"a1b470b5-b9ec-4d92-8965-9c0be5366721\" (UID: \"a1b470b5-b9ec-4d92-8965-9c0be5366721\") " Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755292 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-operator-scripts\") pod \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\" (UID: \"2bc6c217-9ff1-47b6-a60d-9029e501d9e0\") " Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755510 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31cf2934-c66e-40ca-81f5-26c0efff8bd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31cf2934-c66e-40ca-81f5-26c0efff8bd4" (UID: "31cf2934-c66e-40ca-81f5-26c0efff8bd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755722 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bc6c217-9ff1-47b6-a60d-9029e501d9e0" (UID: "2bc6c217-9ff1-47b6-a60d-9029e501d9e0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755775 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jrxf\" (UniqueName: \"kubernetes.io/projected/8bb498b0-229a-430a-8fb9-4311f3c7cd88-kube-api-access-2jrxf\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755793 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31cf2934-c66e-40ca-81f5-26c0efff8bd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755804 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb498b0-229a-430a-8fb9-4311f3c7cd88-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.755812 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1b470b5-b9ec-4d92-8965-9c0be5366721-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1b470b5-b9ec-4d92-8965-9c0be5366721" (UID: "a1b470b5-b9ec-4d92-8965-9c0be5366721"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.758730 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b470b5-b9ec-4d92-8965-9c0be5366721-kube-api-access-nhbhz" (OuterVolumeSpecName: "kube-api-access-nhbhz") pod "a1b470b5-b9ec-4d92-8965-9c0be5366721" (UID: "a1b470b5-b9ec-4d92-8965-9c0be5366721"). InnerVolumeSpecName "kube-api-access-nhbhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.759849 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-kube-api-access-6lchr" (OuterVolumeSpecName: "kube-api-access-6lchr") pod "2bc6c217-9ff1-47b6-a60d-9029e501d9e0" (UID: "2bc6c217-9ff1-47b6-a60d-9029e501d9e0"). InnerVolumeSpecName "kube-api-access-6lchr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.760027 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31cf2934-c66e-40ca-81f5-26c0efff8bd4-kube-api-access-5ndtk" (OuterVolumeSpecName: "kube-api-access-5ndtk") pod "31cf2934-c66e-40ca-81f5-26c0efff8bd4" (UID: "31cf2934-c66e-40ca-81f5-26c0efff8bd4"). InnerVolumeSpecName "kube-api-access-5ndtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.857120 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1b470b5-b9ec-4d92-8965-9c0be5366721-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.857465 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhbhz\" (UniqueName: \"kubernetes.io/projected/a1b470b5-b9ec-4d92-8965-9c0be5366721-kube-api-access-nhbhz\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.857482 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.857495 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ndtk\" (UniqueName: \"kubernetes.io/projected/31cf2934-c66e-40ca-81f5-26c0efff8bd4-kube-api-access-5ndtk\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.857508 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lchr\" (UniqueName: \"kubernetes.io/projected/2bc6c217-9ff1-47b6-a60d-9029e501d9e0-kube-api-access-6lchr\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:20 crc kubenswrapper[5034]: I0105 22:11:20.959736 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gbcl-config-5vvld"] Jan 05 22:11:20 crc kubenswrapper[5034]: W0105 22:11:20.967714 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96889f24_6ca9_4630_955a_d0167deea19c.slice/crio-c39861334cef9c6d8b2ac64e985907db667f2596e8d33b91daadd6070d5f6429 WatchSource:0}: Error finding container c39861334cef9c6d8b2ac64e985907db667f2596e8d33b91daadd6070d5f6429: Status 404 returned error can't find the container with id c39861334cef9c6d8b2ac64e985907db667f2596e8d33b91daadd6070d5f6429 Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.017625 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2xs4w" event={"ID":"2bc6c217-9ff1-47b6-a60d-9029e501d9e0","Type":"ContainerDied","Data":"5012ba47ec087e36edf29b85b6d9ab7a1ad6771df617542fce8800acca13fdda"} Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.017679 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5012ba47ec087e36edf29b85b6d9ab7a1ad6771df617542fce8800acca13fdda" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.017734 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2xs4w" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.019311 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8420-account-create-update-kv5xq" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.019875 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8420-account-create-update-kv5xq" event={"ID":"31cf2934-c66e-40ca-81f5-26c0efff8bd4","Type":"ContainerDied","Data":"9b441095a0fa9538e92cb5e9db91f5f13ab2ec20eab7cd50ef9d09d2d43b2b95"} Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.019921 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b441095a0fa9538e92cb5e9db91f5f13ab2ec20eab7cd50ef9d09d2d43b2b95" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.021610 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d3f-account-create-update-q7fvm" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.021620 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4d3f-account-create-update-q7fvm" event={"ID":"8bb498b0-229a-430a-8fb9-4311f3c7cd88","Type":"ContainerDied","Data":"4698c30684fdb44bc3bf80027e394cd60d80dc8e2302dd4af20dc6a805c6edac"} Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.021663 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4698c30684fdb44bc3bf80027e394cd60d80dc8e2302dd4af20dc6a805c6edac" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.023708 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gbcl-config-5vvld" event={"ID":"96889f24-6ca9-4630-955a-d0167deea19c","Type":"ContainerStarted","Data":"c39861334cef9c6d8b2ac64e985907db667f2596e8d33b91daadd6070d5f6429"} Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.025042 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c82dz" event={"ID":"a1b470b5-b9ec-4d92-8965-9c0be5366721","Type":"ContainerDied","Data":"0ee2acff2ab39df88c71e8f8514708f3b0b7cd2586df37d8f478f4927564d139"} Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.025425 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ee2acff2ab39df88c71e8f8514708f3b0b7cd2586df37d8f478f4927564d139" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.025466 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-c82dz" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.446266 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vpxgp"] Jan 05 22:11:21 crc kubenswrapper[5034]: E0105 22:11:21.446883 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb498b0-229a-430a-8fb9-4311f3c7cd88" containerName="mariadb-account-create-update" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.446895 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb498b0-229a-430a-8fb9-4311f3c7cd88" containerName="mariadb-account-create-update" Jan 05 22:11:21 crc kubenswrapper[5034]: E0105 22:11:21.446923 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31cf2934-c66e-40ca-81f5-26c0efff8bd4" containerName="mariadb-account-create-update" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.446929 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="31cf2934-c66e-40ca-81f5-26c0efff8bd4" containerName="mariadb-account-create-update" Jan 05 22:11:21 crc kubenswrapper[5034]: E0105 22:11:21.446937 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b470b5-b9ec-4d92-8965-9c0be5366721" containerName="mariadb-database-create" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.446944 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b470b5-b9ec-4d92-8965-9c0be5366721" containerName="mariadb-database-create" Jan 05 22:11:21 crc kubenswrapper[5034]: E0105 22:11:21.446953 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bc6c217-9ff1-47b6-a60d-9029e501d9e0" containerName="mariadb-database-create" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.446959 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bc6c217-9ff1-47b6-a60d-9029e501d9e0" containerName="mariadb-database-create" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.447126 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bc6c217-9ff1-47b6-a60d-9029e501d9e0" containerName="mariadb-database-create" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.447140 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="31cf2934-c66e-40ca-81f5-26c0efff8bd4" containerName="mariadb-account-create-update" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.447147 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1b470b5-b9ec-4d92-8965-9c0be5366721" containerName="mariadb-database-create" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.447159 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb498b0-229a-430a-8fb9-4311f3c7cd88" containerName="mariadb-account-create-update" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.447666 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.456786 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vpxgp"] Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.460804 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.576479 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-operator-scripts\") pod \"root-account-create-update-vpxgp\" (UID: \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\") " pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.576570 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtw2\" (UniqueName: \"kubernetes.io/projected/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-kube-api-access-7gtw2\") pod \"root-account-create-update-vpxgp\" (UID: \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\") " pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.678443 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-operator-scripts\") pod \"root-account-create-update-vpxgp\" (UID: \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\") " pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.678718 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gtw2\" (UniqueName: \"kubernetes.io/projected/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-kube-api-access-7gtw2\") pod \"root-account-create-update-vpxgp\" (UID: \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\") " pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.680028 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-operator-scripts\") pod \"root-account-create-update-vpxgp\" (UID: \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\") " pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.719298 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gtw2\" (UniqueName: \"kubernetes.io/projected/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-kube-api-access-7gtw2\") pod \"root-account-create-update-vpxgp\" (UID: \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\") " pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:21 crc kubenswrapper[5034]: I0105 22:11:21.764787 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:22 crc kubenswrapper[5034]: I0105 22:11:22.254494 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vpxgp"] Jan 05 22:11:22 crc kubenswrapper[5034]: I0105 22:11:22.562870 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757dc6fff9-zfqq7" podUID="79846781-e528-4b43-aacd-cbc32085ca10" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Jan 05 22:11:22 crc kubenswrapper[5034]: E0105 22:11:22.797262 5034 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb04ba51f_04b5_4bce_b9e0_9219d7c116b7.slice/crio-2950e5f7f98d9c07a5939a1bb838fe143b1d2fc7594ef95e69cabd81559d8245.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb04ba51f_04b5_4bce_b9e0_9219d7c116b7.slice/crio-conmon-2950e5f7f98d9c07a5939a1bb838fe143b1d2fc7594ef95e69cabd81559d8245.scope\": RecentStats: unable to find data in memory cache]" Jan 05 22:11:22 crc kubenswrapper[5034]: I0105 22:11:22.903664 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:22 crc kubenswrapper[5034]: E0105 22:11:22.903944 5034 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 22:11:22 crc kubenswrapper[5034]: E0105 22:11:22.904101 5034 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 22:11:22 crc kubenswrapper[5034]: E0105 22:11:22.904197 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift podName:4402dece-5e7d-41e8-87e3-54ca201e2c52 nodeName:}" failed. No retries permitted until 2026-01-05 22:11:38.904166328 +0000 UTC m=+1191.276165767 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift") pod "swift-storage-0" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52") : configmap "swift-ring-files" not found Jan 05 22:11:23 crc kubenswrapper[5034]: I0105 22:11:23.041369 5034 generic.go:334] "Generic (PLEG): container finished" podID="b04ba51f-04b5-4bce-b9e0-9219d7c116b7" containerID="2950e5f7f98d9c07a5939a1bb838fe143b1d2fc7594ef95e69cabd81559d8245" exitCode=0 Jan 05 22:11:23 crc kubenswrapper[5034]: I0105 22:11:23.041432 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vpxgp" event={"ID":"b04ba51f-04b5-4bce-b9e0-9219d7c116b7","Type":"ContainerDied","Data":"2950e5f7f98d9c07a5939a1bb838fe143b1d2fc7594ef95e69cabd81559d8245"} Jan 05 22:11:23 crc kubenswrapper[5034]: I0105 22:11:23.041469 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vpxgp" event={"ID":"b04ba51f-04b5-4bce-b9e0-9219d7c116b7","Type":"ContainerStarted","Data":"f2ad91ba91254a6c75373c93307ab1ee0f33edc0fa1a710eb9c2ef74fd850520"} Jan 05 22:11:23 crc kubenswrapper[5034]: I0105 22:11:23.043186 5034 generic.go:334] "Generic (PLEG): container finished" podID="96889f24-6ca9-4630-955a-d0167deea19c" containerID="14524d7362d1054c38b2b70a84f2c2ca21a579f59ba9729100777de4ce174f2a" exitCode=0 Jan 05 22:11:23 crc kubenswrapper[5034]: I0105 22:11:23.043216 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gbcl-config-5vvld" event={"ID":"96889f24-6ca9-4630-955a-d0167deea19c","Type":"ContainerDied","Data":"14524d7362d1054c38b2b70a84f2c2ca21a579f59ba9729100777de4ce174f2a"} Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.538214 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.546213 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.636664 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run\") pod \"96889f24-6ca9-4630-955a-d0167deea19c\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.636724 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhjwd\" (UniqueName: \"kubernetes.io/projected/96889f24-6ca9-4630-955a-d0167deea19c-kube-api-access-zhjwd\") pod \"96889f24-6ca9-4630-955a-d0167deea19c\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.636784 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-additional-scripts\") pod \"96889f24-6ca9-4630-955a-d0167deea19c\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.636857 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-log-ovn\") pod \"96889f24-6ca9-4630-955a-d0167deea19c\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.636887 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-operator-scripts\") pod \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\" (UID: \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\") " Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.636973 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-scripts\") pod \"96889f24-6ca9-4630-955a-d0167deea19c\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.637012 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run-ovn\") pod \"96889f24-6ca9-4630-955a-d0167deea19c\" (UID: \"96889f24-6ca9-4630-955a-d0167deea19c\") " Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.637040 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gtw2\" (UniqueName: \"kubernetes.io/projected/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-kube-api-access-7gtw2\") pod \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\" (UID: \"b04ba51f-04b5-4bce-b9e0-9219d7c116b7\") " Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.637923 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "96889f24-6ca9-4630-955a-d0167deea19c" (UID: "96889f24-6ca9-4630-955a-d0167deea19c"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.638306 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "96889f24-6ca9-4630-955a-d0167deea19c" (UID: "96889f24-6ca9-4630-955a-d0167deea19c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.638363 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b04ba51f-04b5-4bce-b9e0-9219d7c116b7" (UID: "b04ba51f-04b5-4bce-b9e0-9219d7c116b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.638357 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run" (OuterVolumeSpecName: "var-run") pod "96889f24-6ca9-4630-955a-d0167deea19c" (UID: "96889f24-6ca9-4630-955a-d0167deea19c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.639238 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "96889f24-6ca9-4630-955a-d0167deea19c" (UID: "96889f24-6ca9-4630-955a-d0167deea19c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.639440 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-scripts" (OuterVolumeSpecName: "scripts") pod "96889f24-6ca9-4630-955a-d0167deea19c" (UID: "96889f24-6ca9-4630-955a-d0167deea19c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.646370 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-kube-api-access-7gtw2" (OuterVolumeSpecName: "kube-api-access-7gtw2") pod "b04ba51f-04b5-4bce-b9e0-9219d7c116b7" (UID: "b04ba51f-04b5-4bce-b9e0-9219d7c116b7"). InnerVolumeSpecName "kube-api-access-7gtw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.646899 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96889f24-6ca9-4630-955a-d0167deea19c-kube-api-access-zhjwd" (OuterVolumeSpecName: "kube-api-access-zhjwd") pod "96889f24-6ca9-4630-955a-d0167deea19c" (UID: "96889f24-6ca9-4630-955a-d0167deea19c"). InnerVolumeSpecName "kube-api-access-zhjwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.680823 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4gbcl" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.739788 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gtw2\" (UniqueName: \"kubernetes.io/projected/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-kube-api-access-7gtw2\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.739823 5034 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.739833 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhjwd\" (UniqueName: \"kubernetes.io/projected/96889f24-6ca9-4630-955a-d0167deea19c-kube-api-access-zhjwd\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.739843 5034 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.739851 5034 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.739861 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b04ba51f-04b5-4bce-b9e0-9219d7c116b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.739869 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96889f24-6ca9-4630-955a-d0167deea19c-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:24 crc kubenswrapper[5034]: I0105 22:11:24.739878 5034 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/96889f24-6ca9-4630-955a-d0167deea19c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.086938 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4gbcl-config-5vvld" Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.087375 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gbcl-config-5vvld" event={"ID":"96889f24-6ca9-4630-955a-d0167deea19c","Type":"ContainerDied","Data":"c39861334cef9c6d8b2ac64e985907db667f2596e8d33b91daadd6070d5f6429"} Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.088241 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c39861334cef9c6d8b2ac64e985907db667f2596e8d33b91daadd6070d5f6429" Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.113879 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vpxgp" event={"ID":"b04ba51f-04b5-4bce-b9e0-9219d7c116b7","Type":"ContainerDied","Data":"f2ad91ba91254a6c75373c93307ab1ee0f33edc0fa1a710eb9c2ef74fd850520"} Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.114237 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ad91ba91254a6c75373c93307ab1ee0f33edc0fa1a710eb9c2ef74fd850520" Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.114436 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vpxgp" Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.637909 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4gbcl-config-5vvld"] Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.647639 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4gbcl-config-5vvld"] Jan 05 22:11:25 crc kubenswrapper[5034]: I0105 22:11:25.862185 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96889f24-6ca9-4630-955a-d0167deea19c" path="/var/lib/kubelet/pods/96889f24-6ca9-4630-955a-d0167deea19c/volumes" Jan 05 22:11:26 crc kubenswrapper[5034]: I0105 22:11:26.130560 5034 generic.go:334] "Generic (PLEG): container finished" podID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerID="5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518" exitCode=0 Jan 05 22:11:26 crc kubenswrapper[5034]: I0105 22:11:26.130619 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a6b236-e04b-494a-a18e-5d1a8a5ae02a","Type":"ContainerDied","Data":"5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518"} Jan 05 22:11:26 crc kubenswrapper[5034]: I0105 22:11:26.135633 5034 generic.go:334] "Generic (PLEG): container finished" podID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerID="a9b0af71996b2f7b5cfc0164a2338f465cc5484f2c68ff42352cd8642afd9b56" exitCode=0 Jan 05 22:11:26 crc kubenswrapper[5034]: I0105 22:11:26.135736 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94526d3f-1e21-4eef-abb7-5cd05bfb1670","Type":"ContainerDied","Data":"a9b0af71996b2f7b5cfc0164a2338f465cc5484f2c68ff42352cd8642afd9b56"} Jan 05 22:11:27 crc kubenswrapper[5034]: I0105 22:11:27.802319 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vpxgp"] Jan 05 22:11:27 crc kubenswrapper[5034]: I0105 22:11:27.815843 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vpxgp"] Jan 05 22:11:27 crc kubenswrapper[5034]: I0105 22:11:27.848597 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b04ba51f-04b5-4bce-b9e0-9219d7c116b7" path="/var/lib/kubelet/pods/b04ba51f-04b5-4bce-b9e0-9219d7c116b7/volumes" Jan 05 22:11:28 crc kubenswrapper[5034]: I0105 22:11:28.158098 5034 generic.go:334] "Generic (PLEG): container finished" podID="5158186d-181d-498c-8eeb-c222566958f7" containerID="bfb7b48fb39b141b2930622dfff5f754765edc64fa5517e3d7f0bc67c49e0300" exitCode=0 Jan 05 22:11:28 crc kubenswrapper[5034]: I0105 22:11:28.158144 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8wjbq" event={"ID":"5158186d-181d-498c-8eeb-c222566958f7","Type":"ContainerDied","Data":"bfb7b48fb39b141b2930622dfff5f754765edc64fa5517e3d7f0bc67c49e0300"} Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.209659 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8wjbq" event={"ID":"5158186d-181d-498c-8eeb-c222566958f7","Type":"ContainerDied","Data":"0e323d6437034b1eb8526a8bf58b15973ad28a88e37e7ceecca443b72e468f61"} Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.210321 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e323d6437034b1eb8526a8bf58b15973ad28a88e37e7ceecca443b72e468f61" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.369386 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.432111 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5158186d-181d-498c-8eeb-c222566958f7-etc-swift\") pod \"5158186d-181d-498c-8eeb-c222566958f7\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.432398 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-ring-data-devices\") pod \"5158186d-181d-498c-8eeb-c222566958f7\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.432479 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-dispersionconf\") pod \"5158186d-181d-498c-8eeb-c222566958f7\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.432525 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-swiftconf\") pod \"5158186d-181d-498c-8eeb-c222566958f7\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.432576 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p56r5\" (UniqueName: \"kubernetes.io/projected/5158186d-181d-498c-8eeb-c222566958f7-kube-api-access-p56r5\") pod \"5158186d-181d-498c-8eeb-c222566958f7\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.432630 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-scripts\") pod \"5158186d-181d-498c-8eeb-c222566958f7\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 
22:11:32.432665 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-combined-ca-bundle\") pod \"5158186d-181d-498c-8eeb-c222566958f7\" (UID: \"5158186d-181d-498c-8eeb-c222566958f7\") " Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.433130 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5158186d-181d-498c-8eeb-c222566958f7" (UID: "5158186d-181d-498c-8eeb-c222566958f7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.433363 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5158186d-181d-498c-8eeb-c222566958f7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5158186d-181d-498c-8eeb-c222566958f7" (UID: "5158186d-181d-498c-8eeb-c222566958f7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.435020 5034 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5158186d-181d-498c-8eeb-c222566958f7-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.435063 5034 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.440985 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5158186d-181d-498c-8eeb-c222566958f7-kube-api-access-p56r5" (OuterVolumeSpecName: "kube-api-access-p56r5") pod "5158186d-181d-498c-8eeb-c222566958f7" (UID: "5158186d-181d-498c-8eeb-c222566958f7"). InnerVolumeSpecName "kube-api-access-p56r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.445443 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5158186d-181d-498c-8eeb-c222566958f7" (UID: "5158186d-181d-498c-8eeb-c222566958f7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.462844 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-scripts" (OuterVolumeSpecName: "scripts") pod "5158186d-181d-498c-8eeb-c222566958f7" (UID: "5158186d-181d-498c-8eeb-c222566958f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.471717 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5158186d-181d-498c-8eeb-c222566958f7" (UID: "5158186d-181d-498c-8eeb-c222566958f7"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.486209 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5158186d-181d-498c-8eeb-c222566958f7" (UID: "5158186d-181d-498c-8eeb-c222566958f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.536540 5034 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.536577 5034 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.536591 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p56r5\" (UniqueName: \"kubernetes.io/projected/5158186d-181d-498c-8eeb-c222566958f7-kube-api-access-p56r5\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.536602 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158186d-181d-498c-8eeb-c222566958f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.536612 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158186d-181d-498c-8eeb-c222566958f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.830803 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hkgrk"] Jan 05 22:11:32 crc kubenswrapper[5034]: E0105 22:11:32.831167 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96889f24-6ca9-4630-955a-d0167deea19c" containerName="ovn-config" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.831181 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="96889f24-6ca9-4630-955a-d0167deea19c" containerName="ovn-config" Jan 05 22:11:32 crc kubenswrapper[5034]: E0105 22:11:32.831199 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04ba51f-04b5-4bce-b9e0-9219d7c116b7" containerName="mariadb-account-create-update" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.831205 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04ba51f-04b5-4bce-b9e0-9219d7c116b7" containerName="mariadb-account-create-update" Jan 05 22:11:32 crc kubenswrapper[5034]: E0105 22:11:32.831224 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5158186d-181d-498c-8eeb-c222566958f7" containerName="swift-ring-rebalance" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.831233 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5158186d-181d-498c-8eeb-c222566958f7" containerName="swift-ring-rebalance" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.831388 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04ba51f-04b5-4bce-b9e0-9219d7c116b7" containerName="mariadb-account-create-update" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.831402 5034 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5158186d-181d-498c-8eeb-c222566958f7" containerName="swift-ring-rebalance" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.831408 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="96889f24-6ca9-4630-955a-d0167deea19c" containerName="ovn-config" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.831928 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.835497 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.847016 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hkgrk"] Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.942381 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zxh\" (UniqueName: \"kubernetes.io/projected/1b6ce724-2b29-4249-ac22-c95de9c2bb14-kube-api-access-p8zxh\") pod \"root-account-create-update-hkgrk\" (UID: \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\") " pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:32 crc kubenswrapper[5034]: I0105 22:11:32.942532 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6ce724-2b29-4249-ac22-c95de9c2bb14-operator-scripts\") pod \"root-account-create-update-hkgrk\" (UID: \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\") " pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.044408 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6ce724-2b29-4249-ac22-c95de9c2bb14-operator-scripts\") pod \"root-account-create-update-hkgrk\" (UID: \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\") " pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.045189 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zxh\" (UniqueName: \"kubernetes.io/projected/1b6ce724-2b29-4249-ac22-c95de9c2bb14-kube-api-access-p8zxh\") pod \"root-account-create-update-hkgrk\" (UID: \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\") " pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.045291 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6ce724-2b29-4249-ac22-c95de9c2bb14-operator-scripts\") pod \"root-account-create-update-hkgrk\" (UID: \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\") " pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.062576 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zxh\" (UniqueName: \"kubernetes.io/projected/1b6ce724-2b29-4249-ac22-c95de9c2bb14-kube-api-access-p8zxh\") pod \"root-account-create-update-hkgrk\" (UID: \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\") " pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.159590 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.237843 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94526d3f-1e21-4eef-abb7-5cd05bfb1670","Type":"ContainerStarted","Data":"6142e99eab6f8d5fa2aa4392f035c3a6396193c921db5594487e88a07ec633b0"} Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.239408 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.239443 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5l8t7" event={"ID":"492f33ff-82d7-4355-a412-faf4e879a228","Type":"ContainerStarted","Data":"c53666cb18451ea45ec78f56963d247d0b365d45c797f66511f8ec7d56a3c013"} Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.241864 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a6b236-e04b-494a-a18e-5d1a8a5ae02a","Type":"ContainerStarted","Data":"2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052"} Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.242229 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.244789 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8wjbq" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.270551 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.878277568 podStartE2EDuration="1m24.2704499s" podCreationTimestamp="2026-01-05 22:10:09 +0000 UTC" firstStartedPulling="2026-01-05 22:10:11.090295178 +0000 UTC m=+1103.462294617" lastFinishedPulling="2026-01-05 22:10:51.48246751 +0000 UTC m=+1143.854466949" observedRunningTime="2026-01-05 22:11:33.261350272 +0000 UTC m=+1185.633349711" watchObservedRunningTime="2026-01-05 22:11:33.2704499 +0000 UTC m=+1185.642449339" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.296017 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371952.558805 podStartE2EDuration="1m24.295971043s" podCreationTimestamp="2026-01-05 22:10:09 +0000 UTC" firstStartedPulling="2026-01-05 22:10:12.285554303 +0000 UTC m=+1104.657553742" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:11:33.285309381 +0000 UTC m=+1185.657308820" watchObservedRunningTime="2026-01-05 22:11:33.295971043 +0000 UTC m=+1185.667970492" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.307887 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5l8t7" podStartSLOduration=4.562233576 podStartE2EDuration="18.307861089s" podCreationTimestamp="2026-01-05 22:11:15 +0000 UTC" firstStartedPulling="2026-01-05 22:11:18.520146298 +0000 UTC m=+1170.892145737" lastFinishedPulling="2026-01-05 22:11:32.265773811 +0000 UTC m=+1184.637773250" observedRunningTime="2026-01-05 22:11:33.302607281 +0000 UTC m=+1185.674606720" watchObservedRunningTime="2026-01-05 22:11:33.307861089 +0000 UTC m=+1185.679860528" Jan 05 22:11:33 crc kubenswrapper[5034]: I0105 22:11:33.696458 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hkgrk"] Jan 05 22:11:34 crc 
kubenswrapper[5034]: I0105 22:11:34.255549 5034 generic.go:334] "Generic (PLEG): container finished" podID="1b6ce724-2b29-4249-ac22-c95de9c2bb14" containerID="407d8a05bffd4abe9ad082589bcc9ea3f018e3dbb52b094de90c3bdb95dd7f60" exitCode=0 Jan 05 22:11:34 crc kubenswrapper[5034]: I0105 22:11:34.255606 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hkgrk" event={"ID":"1b6ce724-2b29-4249-ac22-c95de9c2bb14","Type":"ContainerDied","Data":"407d8a05bffd4abe9ad082589bcc9ea3f018e3dbb52b094de90c3bdb95dd7f60"} Jan 05 22:11:34 crc kubenswrapper[5034]: I0105 22:11:34.255920 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hkgrk" event={"ID":"1b6ce724-2b29-4249-ac22-c95de9c2bb14","Type":"ContainerStarted","Data":"f568292bd9bf17347c3e4dda9d261fb1eba7bd975b1e2a27d251029d571c0333"} Jan 05 22:11:35 crc kubenswrapper[5034]: I0105 22:11:35.585805 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:35 crc kubenswrapper[5034]: I0105 22:11:35.714504 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8zxh\" (UniqueName: \"kubernetes.io/projected/1b6ce724-2b29-4249-ac22-c95de9c2bb14-kube-api-access-p8zxh\") pod \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\" (UID: \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\") " Jan 05 22:11:35 crc kubenswrapper[5034]: I0105 22:11:35.714703 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6ce724-2b29-4249-ac22-c95de9c2bb14-operator-scripts\") pod \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\" (UID: \"1b6ce724-2b29-4249-ac22-c95de9c2bb14\") " Jan 05 22:11:35 crc kubenswrapper[5034]: I0105 22:11:35.715770 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b6ce724-2b29-4249-ac22-c95de9c2bb14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b6ce724-2b29-4249-ac22-c95de9c2bb14" (UID: "1b6ce724-2b29-4249-ac22-c95de9c2bb14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:35 crc kubenswrapper[5034]: I0105 22:11:35.722061 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6ce724-2b29-4249-ac22-c95de9c2bb14-kube-api-access-p8zxh" (OuterVolumeSpecName: "kube-api-access-p8zxh") pod "1b6ce724-2b29-4249-ac22-c95de9c2bb14" (UID: "1b6ce724-2b29-4249-ac22-c95de9c2bb14"). InnerVolumeSpecName "kube-api-access-p8zxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:35 crc kubenswrapper[5034]: I0105 22:11:35.817432 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b6ce724-2b29-4249-ac22-c95de9c2bb14-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:35 crc kubenswrapper[5034]: I0105 22:11:35.817502 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8zxh\" (UniqueName: \"kubernetes.io/projected/1b6ce724-2b29-4249-ac22-c95de9c2bb14-kube-api-access-p8zxh\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:36 crc kubenswrapper[5034]: I0105 22:11:36.272979 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hkgrk" event={"ID":"1b6ce724-2b29-4249-ac22-c95de9c2bb14","Type":"ContainerDied","Data":"f568292bd9bf17347c3e4dda9d261fb1eba7bd975b1e2a27d251029d571c0333"} Jan 05 22:11:36 crc kubenswrapper[5034]: I0105 22:11:36.273043 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f568292bd9bf17347c3e4dda9d261fb1eba7bd975b1e2a27d251029d571c0333" Jan 05 22:11:36 crc kubenswrapper[5034]: I0105 22:11:36.273163 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hkgrk" Jan 05 22:11:38 crc kubenswrapper[5034]: I0105 22:11:38.993153 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:39 crc kubenswrapper[5034]: I0105 22:11:39.001432 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"swift-storage-0\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " pod="openstack/swift-storage-0" Jan 05 22:11:39 crc kubenswrapper[5034]: I0105 22:11:39.251745 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 05 22:11:39 crc kubenswrapper[5034]: I0105 22:11:39.299392 5034 generic.go:334] "Generic (PLEG): container finished" podID="492f33ff-82d7-4355-a412-faf4e879a228" containerID="c53666cb18451ea45ec78f56963d247d0b365d45c797f66511f8ec7d56a3c013" exitCode=0 Jan 05 22:11:39 crc kubenswrapper[5034]: I0105 22:11:39.299443 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5l8t7" event={"ID":"492f33ff-82d7-4355-a412-faf4e879a228","Type":"ContainerDied","Data":"c53666cb18451ea45ec78f56963d247d0b365d45c797f66511f8ec7d56a3c013"} Jan 05 22:11:39 crc kubenswrapper[5034]: I0105 22:11:39.885455 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.307482 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"a2ceaa2f7e504c25c0515bb4a8bb0ee2eb1a0402c04c8422117f5c4fd96a705e"} Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.768798 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.823918 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-config-data\") pod \"492f33ff-82d7-4355-a412-faf4e879a228\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.823984 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pb25\" (UniqueName: \"kubernetes.io/projected/492f33ff-82d7-4355-a412-faf4e879a228-kube-api-access-4pb25\") pod \"492f33ff-82d7-4355-a412-faf4e879a228\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.824068 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-combined-ca-bundle\") pod \"492f33ff-82d7-4355-a412-faf4e879a228\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.824193 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-db-sync-config-data\") pod \"492f33ff-82d7-4355-a412-faf4e879a228\" (UID: \"492f33ff-82d7-4355-a412-faf4e879a228\") " Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.835275 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492f33ff-82d7-4355-a412-faf4e879a228-kube-api-access-4pb25" (OuterVolumeSpecName: "kube-api-access-4pb25") pod "492f33ff-82d7-4355-a412-faf4e879a228" (UID: "492f33ff-82d7-4355-a412-faf4e879a228"). InnerVolumeSpecName "kube-api-access-4pb25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.844430 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "492f33ff-82d7-4355-a412-faf4e879a228" (UID: "492f33ff-82d7-4355-a412-faf4e879a228"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.851789 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "492f33ff-82d7-4355-a412-faf4e879a228" (UID: "492f33ff-82d7-4355-a412-faf4e879a228"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.871665 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-config-data" (OuterVolumeSpecName: "config-data") pod "492f33ff-82d7-4355-a412-faf4e879a228" (UID: "492f33ff-82d7-4355-a412-faf4e879a228"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.927324 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.927367 5034 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.927380 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492f33ff-82d7-4355-a412-faf4e879a228-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:40 crc kubenswrapper[5034]: I0105 22:11:40.927393 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pb25\" (UniqueName: \"kubernetes.io/projected/492f33ff-82d7-4355-a412-faf4e879a228-kube-api-access-4pb25\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.318121 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5l8t7" event={"ID":"492f33ff-82d7-4355-a412-faf4e879a228","Type":"ContainerDied","Data":"fc54392cb0e83a068b04c558619de55802956a680a0b25014ca31ee6e01daef8"} Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.318623 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc54392cb0e83a068b04c558619de55802956a680a0b25014ca31ee6e01daef8" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.318182 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5l8t7" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.899859 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-p4hrh"] Jan 05 22:11:41 crc kubenswrapper[5034]: E0105 22:11:41.901373 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6ce724-2b29-4249-ac22-c95de9c2bb14" containerName="mariadb-account-create-update" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.901404 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6ce724-2b29-4249-ac22-c95de9c2bb14" containerName="mariadb-account-create-update" Jan 05 22:11:41 crc kubenswrapper[5034]: E0105 22:11:41.901418 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492f33ff-82d7-4355-a412-faf4e879a228" containerName="glance-db-sync" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.901427 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="492f33ff-82d7-4355-a412-faf4e879a228" containerName="glance-db-sync" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.901867 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="492f33ff-82d7-4355-a412-faf4e879a228" containerName="glance-db-sync" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.901906 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6ce724-2b29-4249-ac22-c95de9c2bb14" containerName="mariadb-account-create-update" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.903480 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:41 crc kubenswrapper[5034]: I0105 22:11:41.914498 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-p4hrh"] Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.001464 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-config\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.001648 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.001879 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.001995 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b688b\" (UniqueName: \"kubernetes.io/projected/994faa91-11c3-465e-9d3f-2bdbf2b84328-kube-api-access-b688b\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.002217 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.104117 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.104607 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-config\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.104632 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.104701 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.104733 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b688b\" (UniqueName: \"kubernetes.io/projected/994faa91-11c3-465e-9d3f-2bdbf2b84328-kube-api-access-b688b\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.105315 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.105771 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.105930 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.106066 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-config\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.130002 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b688b\" (UniqueName: \"kubernetes.io/projected/994faa91-11c3-465e-9d3f-2bdbf2b84328-kube-api-access-b688b\") pod \"dnsmasq-dns-79778dbd8c-p4hrh\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.227711 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.331120 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac"} Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.331177 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370"} Jan 05 22:11:42 crc kubenswrapper[5034]: I0105 22:11:42.331191 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08"} Jan 05 22:11:43 crc kubenswrapper[5034]: I0105 22:11:43.348441 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9"} Jan 05 22:11:43 crc kubenswrapper[5034]: I0105 22:11:43.824789 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-p4hrh"] Jan 05 22:11:44 crc kubenswrapper[5034]: I0105 22:11:44.360216 5034 generic.go:334] "Generic (PLEG): container finished" podID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerID="ac3c8e4758503e92475e2d26a4c03fea79c0c64b762eb51bc37aa6cb46466081" exitCode=0 Jan 05 22:11:44 crc kubenswrapper[5034]: I0105 22:11:44.360290 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" event={"ID":"994faa91-11c3-465e-9d3f-2bdbf2b84328","Type":"ContainerDied","Data":"ac3c8e4758503e92475e2d26a4c03fea79c0c64b762eb51bc37aa6cb46466081"} Jan 05 22:11:44 crc kubenswrapper[5034]: I0105 22:11:44.360953 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" event={"ID":"994faa91-11c3-465e-9d3f-2bdbf2b84328","Type":"ContainerStarted","Data":"9523796f50b6b1cf713b3a2a765163fc19514f0fbe2319277613c9e2cd047dd5"} Jan 05 22:11:44 crc kubenswrapper[5034]: I0105 22:11:44.412266 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4"} Jan 05 22:11:44 crc kubenswrapper[5034]: I0105 22:11:44.412382 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542"} Jan 05 22:11:45 crc kubenswrapper[5034]: I0105 22:11:45.423715 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" event={"ID":"994faa91-11c3-465e-9d3f-2bdbf2b84328","Type":"ContainerStarted","Data":"f39e48f2802b2ac2efe4275c7ddb8cb739d20126bfdde36345bd13a09f2eae54"} Jan 05 22:11:45 crc kubenswrapper[5034]: I0105 22:11:45.424128 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:45 crc kubenswrapper[5034]: I0105 22:11:45.430060 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa"} Jan 05 22:11:45 crc kubenswrapper[5034]: I0105 22:11:45.430145 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03"} Jan 05 22:11:45 crc kubenswrapper[5034]: I0105 22:11:45.445207 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" podStartSLOduration=4.445179826 podStartE2EDuration="4.445179826s" podCreationTimestamp="2026-01-05 22:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:11:45.439776863 +0000 UTC m=+1197.811776312" watchObservedRunningTime="2026-01-05 22:11:45.445179826 +0000 UTC m=+1197.817179265" Jan 05 22:11:50 crc kubenswrapper[5034]: I0105 22:11:50.637738 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.003404 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-92frq"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.004887 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-92frq" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.013167 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-92frq"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.100206 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6fzvz"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.106471 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.114796 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6fzvz"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.196515 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9182-account-create-update-w78zx"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.197733 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.200095 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2741770-25b1-43ea-878d-f57b57e65fac-operator-scripts\") pod \"cinder-db-create-92frq\" (UID: \"f2741770-25b1-43ea-878d-f57b57e65fac\") " pod="openstack/cinder-db-create-92frq" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.200215 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwwv2\" (UniqueName: \"kubernetes.io/projected/f2741770-25b1-43ea-878d-f57b57e65fac-kube-api-access-kwwv2\") pod \"cinder-db-create-92frq\" (UID: \"f2741770-25b1-43ea-878d-f57b57e65fac\") " pod="openstack/cinder-db-create-92frq" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.200310 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.216836 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9182-account-create-update-w78zx"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.299960 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-l5x88"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.301398 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.301717 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thkz\" (UniqueName: \"kubernetes.io/projected/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-kube-api-access-2thkz\") pod \"barbican-db-create-6fzvz\" (UID: \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\") " pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.301795 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwwv2\" (UniqueName: \"kubernetes.io/projected/f2741770-25b1-43ea-878d-f57b57e65fac-kube-api-access-kwwv2\") pod \"cinder-db-create-92frq\" (UID: \"f2741770-25b1-43ea-878d-f57b57e65fac\") " pod="openstack/cinder-db-create-92frq" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.301853 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2741770-25b1-43ea-878d-f57b57e65fac-operator-scripts\") pod \"cinder-db-create-92frq\" (UID: \"f2741770-25b1-43ea-878d-f57b57e65fac\") " pod="openstack/cinder-db-create-92frq" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.301910 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcms2\" (UniqueName: \"kubernetes.io/projected/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-kube-api-access-bcms2\") pod \"barbican-9182-account-create-update-w78zx\" (UID: \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\") " pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.301955 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-operator-scripts\") pod \"barbican-db-create-6fzvz\" (UID: 
\"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\") " pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.301991 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-operator-scripts\") pod \"barbican-9182-account-create-update-w78zx\" (UID: \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\") " pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.303040 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2741770-25b1-43ea-878d-f57b57e65fac-operator-scripts\") pod \"cinder-db-create-92frq\" (UID: \"f2741770-25b1-43ea-878d-f57b57e65fac\") " pod="openstack/cinder-db-create-92frq" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.312703 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.312954 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.313182 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-62wxh" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.313365 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.318687 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c4ab-account-create-update-gmflk"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.320488 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.323185 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.330120 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c4ab-account-create-update-gmflk"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.339627 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l5x88"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.370966 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwwv2\" (UniqueName: \"kubernetes.io/projected/f2741770-25b1-43ea-878d-f57b57e65fac-kube-api-access-kwwv2\") pod \"cinder-db-create-92frq\" (UID: \"f2741770-25b1-43ea-878d-f57b57e65fac\") " pod="openstack/cinder-db-create-92frq" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.403951 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcms2\" (UniqueName: \"kubernetes.io/projected/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-kube-api-access-bcms2\") pod \"barbican-9182-account-create-update-w78zx\" (UID: \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\") " pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.404066 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-operator-scripts\") pod \"barbican-db-create-6fzvz\" (UID: \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\") " pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.404149 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmrz\" (UniqueName: \"kubernetes.io/projected/fa9b2abe-27f2-42c1-b085-c58641532b1a-kube-api-access-8hmrz\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.404193 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-operator-scripts\") pod \"barbican-9182-account-create-update-w78zx\" (UID: \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\") " pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.404240 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-combined-ca-bundle\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.404398 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-config-data\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.404433 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2thkz\" (UniqueName: \"kubernetes.io/projected/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-kube-api-access-2thkz\") pod \"barbican-db-create-6fzvz\" (UID: \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\") " pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.407649 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-operator-scripts\") pod \"barbican-db-create-6fzvz\" (UID: \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\") " pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.407875 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-operator-scripts\") pod \"barbican-9182-account-create-update-w78zx\" (UID: \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\") " pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.428818 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcms2\" (UniqueName: \"kubernetes.io/projected/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-kube-api-access-bcms2\") pod \"barbican-9182-account-create-update-w78zx\" (UID: \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\") " pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.429127 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6tszf"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.430373 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.456429 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6tszf"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.460324 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thkz\" (UniqueName: \"kubernetes.io/projected/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-kube-api-access-2thkz\") pod \"barbican-db-create-6fzvz\" (UID: \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\") " pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.493213 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2188-account-create-update-r8nqh"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.494321 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.497445 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03"} Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.497509 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7"} Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.497521 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed"} Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.499025 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.507964 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-combined-ca-bundle\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.509104 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-config-data\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.509211 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ad3d67-1a07-4021-ac14-1f7660deedb9-operator-scripts\") pod \"cinder-c4ab-account-create-update-gmflk\" (UID: \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\") " pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.509617 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwlz\" (UniqueName: \"kubernetes.io/projected/e1ad3d67-1a07-4021-ac14-1f7660deedb9-kube-api-access-2rwlz\") pod \"cinder-c4ab-account-create-update-gmflk\" (UID: \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\") " pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.509698 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hmrz\" (UniqueName: \"kubernetes.io/projected/fa9b2abe-27f2-42c1-b085-c58641532b1a-kube-api-access-8hmrz\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.516149 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-combined-ca-bundle\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 
22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.517006 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2188-account-create-update-r8nqh"] Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.517162 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-config-data\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.541662 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.550618 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hmrz\" (UniqueName: \"kubernetes.io/projected/fa9b2abe-27f2-42c1-b085-c58641532b1a-kube-api-access-8hmrz\") pod \"keystone-db-sync-l5x88\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.606331 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.612335 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ad3d67-1a07-4021-ac14-1f7660deedb9-operator-scripts\") pod \"cinder-c4ab-account-create-update-gmflk\" (UID: \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\") " pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.612580 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-operator-scripts\") pod \"neutron-db-create-6tszf\" (UID: \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\") " pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.612665 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gtfl\" (UniqueName: \"kubernetes.io/projected/33440247-28b2-4dbb-97ba-868cda48348e-kube-api-access-9gtfl\") pod \"neutron-2188-account-create-update-r8nqh\" (UID: \"33440247-28b2-4dbb-97ba-868cda48348e\") " pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.612750 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33440247-28b2-4dbb-97ba-868cda48348e-operator-scripts\") pod \"neutron-2188-account-create-update-r8nqh\" (UID: \"33440247-28b2-4dbb-97ba-868cda48348e\") " pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.613259 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwlz\" (UniqueName: \"kubernetes.io/projected/e1ad3d67-1a07-4021-ac14-1f7660deedb9-kube-api-access-2rwlz\") pod \"cinder-c4ab-account-create-update-gmflk\" (UID: \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\") " pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.613347 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-c7624\" (UniqueName: \"kubernetes.io/projected/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-kube-api-access-c7624\") pod \"neutron-db-create-6tszf\" (UID: \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\") " pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.614358 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ad3d67-1a07-4021-ac14-1f7660deedb9-operator-scripts\") pod \"cinder-c4ab-account-create-update-gmflk\" (UID: \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\") " pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.626199 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-92frq" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.636876 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwlz\" (UniqueName: \"kubernetes.io/projected/e1ad3d67-1a07-4021-ac14-1f7660deedb9-kube-api-access-2rwlz\") pod \"cinder-c4ab-account-create-update-gmflk\" (UID: \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\") " pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.715951 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-operator-scripts\") pod \"neutron-db-create-6tszf\" (UID: \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\") " pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.716396 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gtfl\" (UniqueName: \"kubernetes.io/projected/33440247-28b2-4dbb-97ba-868cda48348e-kube-api-access-9gtfl\") pod \"neutron-2188-account-create-update-r8nqh\" (UID: \"33440247-28b2-4dbb-97ba-868cda48348e\") " pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.716459 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33440247-28b2-4dbb-97ba-868cda48348e-operator-scripts\") pod \"neutron-2188-account-create-update-r8nqh\" (UID: \"33440247-28b2-4dbb-97ba-868cda48348e\") " pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.716483 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7624\" (UniqueName: \"kubernetes.io/projected/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-kube-api-access-c7624\") pod \"neutron-db-create-6tszf\" (UID: \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\") " pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.718668 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-operator-scripts\") pod \"neutron-db-create-6tszf\" (UID: \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\") " pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.722278 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33440247-28b2-4dbb-97ba-868cda48348e-operator-scripts\") pod 
\"neutron-2188-account-create-update-r8nqh\" (UID: \"33440247-28b2-4dbb-97ba-868cda48348e\") " pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.738326 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.745658 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7624\" (UniqueName: \"kubernetes.io/projected/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-kube-api-access-c7624\") pod \"neutron-db-create-6tszf\" (UID: \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\") " pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.746327 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gtfl\" (UniqueName: \"kubernetes.io/projected/33440247-28b2-4dbb-97ba-868cda48348e-kube-api-access-9gtfl\") pod \"neutron-2188-account-create-update-r8nqh\" (UID: \"33440247-28b2-4dbb-97ba-868cda48348e\") " pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.766725 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l5x88" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.774558 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.794020 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:51 crc kubenswrapper[5034]: I0105 22:11:51.827403 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.239281 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.310688 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9182-account-create-update-w78zx"] Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.368197 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-f5l6r"] Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.368577 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" podUID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" containerName="dnsmasq-dns" containerID="cri-o://5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94" gracePeriod=10 Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.382793 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-92frq"] Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.549887 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-92frq" event={"ID":"f2741770-25b1-43ea-878d-f57b57e65fac","Type":"ContainerStarted","Data":"a24d0b4ff6217e1429e187e7af36eb61d7960d3497a4a74e2b431c567e1ff4e0"} Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.595679 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346"} Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.595735 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c"} Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.595744 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a"} Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.598265 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9182-account-create-update-w78zx" event={"ID":"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25","Type":"ContainerStarted","Data":"e06e6a647893a9c8935539bb3d72da7545605fec6752ab54be5729447383b850"} Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.683433 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6fzvz"] Jan 05 22:11:52 crc kubenswrapper[5034]: E0105 22:11:52.865933 5034 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.156:51798->38.102.83.156:40825: write tcp 38.102.83.156:51798->38.102.83.156:40825: write: broken pipe Jan 05 22:11:52 crc kubenswrapper[5034]: I0105 22:11:52.987871 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c4ab-account-create-update-gmflk"] Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.010163 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l5x88"] Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.023375 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.141896 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2188-account-create-update-r8nqh"] Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.149480 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6tszf"] Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.159403 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-dns-svc\") pod \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.159526 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-config\") pod \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.159587 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d4lf\" (UniqueName: \"kubernetes.io/projected/47043bfb-044e-4b09-9c61-c97cf3b17a5e-kube-api-access-6d4lf\") pod \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.159672 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-sb\") pod \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.159756 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-nb\") pod \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\" (UID: \"47043bfb-044e-4b09-9c61-c97cf3b17a5e\") " Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.175040 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47043bfb-044e-4b09-9c61-c97cf3b17a5e-kube-api-access-6d4lf" (OuterVolumeSpecName: "kube-api-access-6d4lf") pod "47043bfb-044e-4b09-9c61-c97cf3b17a5e" (UID: "47043bfb-044e-4b09-9c61-c97cf3b17a5e"). InnerVolumeSpecName "kube-api-access-6d4lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.265614 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d4lf\" (UniqueName: \"kubernetes.io/projected/47043bfb-044e-4b09-9c61-c97cf3b17a5e-kube-api-access-6d4lf\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.339411 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47043bfb-044e-4b09-9c61-c97cf3b17a5e" (UID: "47043bfb-044e-4b09-9c61-c97cf3b17a5e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.367270 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-config" (OuterVolumeSpecName: "config") pod "47043bfb-044e-4b09-9c61-c97cf3b17a5e" (UID: "47043bfb-044e-4b09-9c61-c97cf3b17a5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.368797 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.368844 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.374195 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47043bfb-044e-4b09-9c61-c97cf3b17a5e" (UID: "47043bfb-044e-4b09-9c61-c97cf3b17a5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.397583 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47043bfb-044e-4b09-9c61-c97cf3b17a5e" (UID: "47043bfb-044e-4b09-9c61-c97cf3b17a5e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.475327 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.475365 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47043bfb-044e-4b09-9c61-c97cf3b17a5e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:53 crc kubenswrapper[5034]: E0105 22:11:53.588988 5034 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2741770_25b1_43ea_878d_f57b57e65fac.slice/crio-conmon-7a0186b1e44ac8134f9fe51361b5dc44c3e8bf3da775b6eddb81ba01ec8b492b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2741770_25b1_43ea_878d_f57b57e65fac.slice/crio-7a0186b1e44ac8134f9fe51361b5dc44c3e8bf3da775b6eddb81ba01ec8b492b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef7e5dc1_c4d9_4481_a667_3cdf0a550f25.slice/crio-7830319424df571955a459862ab6f01845556f9ddae2f57038e7cc18d8f7d278.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08a0a0a_78db_4b23_b4bd_15c14d70c14a.slice/crio-conmon-310342d4c4d8aee71861635fa5846ee0d54bd1dcaa6418240151b58f474c18f6.scope\": RecentStats: unable to find data in memory cache]" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.613577 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2188-account-create-update-r8nqh" event={"ID":"33440247-28b2-4dbb-97ba-868cda48348e","Type":"ContainerStarted","Data":"86cd8648ad710237e2c737321d8579ef123c2c5c6943eff3a278dc4f5216f2de"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.613630 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2188-account-create-update-r8nqh" event={"ID":"33440247-28b2-4dbb-97ba-868cda48348e","Type":"ContainerStarted","Data":"910714338f1fc533db5fcac18c2679be2306eed002fd0d471491212c0943f1b7"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.629224 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerStarted","Data":"9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.631250 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l5x88" event={"ID":"fa9b2abe-27f2-42c1-b085-c58641532b1a","Type":"ContainerStarted","Data":"83fc052e7613f3051f1ebf1e1f527d20c25fb1ca32a8e8e4b2e1046a7b44ec18"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.635667 5034 generic.go:334] "Generic (PLEG): container finished" podID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" containerID="5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94" exitCode=0 Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.635938 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.636069 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" event={"ID":"47043bfb-044e-4b09-9c61-c97cf3b17a5e","Type":"ContainerDied","Data":"5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.636154 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-f5l6r" event={"ID":"47043bfb-044e-4b09-9c61-c97cf3b17a5e","Type":"ContainerDied","Data":"51de5bc9b8073083337cfdf2744a65d31ecd9273bb9ed1437483b8182bbecc56"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.636321 5034 scope.go:117] "RemoveContainer" containerID="5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.641949 5034 generic.go:334] "Generic (PLEG): container finished" podID="f2741770-25b1-43ea-878d-f57b57e65fac" containerID="7a0186b1e44ac8134f9fe51361b5dc44c3e8bf3da775b6eddb81ba01ec8b492b" exitCode=0 Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.642007 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-92frq" event={"ID":"f2741770-25b1-43ea-878d-f57b57e65fac","Type":"ContainerDied","Data":"7a0186b1e44ac8134f9fe51361b5dc44c3e8bf3da775b6eddb81ba01ec8b492b"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.643739 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2188-account-create-update-r8nqh" podStartSLOduration=2.643715795 podStartE2EDuration="2.643715795s" podCreationTimestamp="2026-01-05 22:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:11:53.63507016 +0000 UTC m=+1206.007069599" watchObservedRunningTime="2026-01-05 22:11:53.643715795 +0000 UTC m=+1206.015715234" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.644609 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6tszf" event={"ID":"65d4ede5-3c50-4cfe-a1aa-276ef430fe97","Type":"ContainerStarted","Data":"6f134b9fa65defccabde82e84c6e4dc250a78dc9edb97b12d8ca3a8e9f5e1687"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.644640 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6tszf" event={"ID":"65d4ede5-3c50-4cfe-a1aa-276ef430fe97","Type":"ContainerStarted","Data":"5370e39febe200519f3acced441fe42c4c8eca29e83e9209ded17b9bfe6e922d"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.646703 5034 generic.go:334] "Generic (PLEG): container finished" podID="b08a0a0a-78db-4b23-b4bd-15c14d70c14a" containerID="310342d4c4d8aee71861635fa5846ee0d54bd1dcaa6418240151b58f474c18f6" exitCode=0 Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.646747 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6fzvz" event={"ID":"b08a0a0a-78db-4b23-b4bd-15c14d70c14a","Type":"ContainerDied","Data":"310342d4c4d8aee71861635fa5846ee0d54bd1dcaa6418240151b58f474c18f6"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.646762 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6fzvz" event={"ID":"b08a0a0a-78db-4b23-b4bd-15c14d70c14a","Type":"ContainerStarted","Data":"5f0bd80a29280f4646e3c7ca1ded7c7768a7eb3963808e84a15559463f048cd4"} Jan 05 22:11:53 crc kubenswrapper[5034]: 
I0105 22:11:53.648160 5034 generic.go:334] "Generic (PLEG): container finished" podID="ef7e5dc1-c4d9-4481-a667-3cdf0a550f25" containerID="7830319424df571955a459862ab6f01845556f9ddae2f57038e7cc18d8f7d278" exitCode=0 Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.648226 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9182-account-create-update-w78zx" event={"ID":"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25","Type":"ContainerDied","Data":"7830319424df571955a459862ab6f01845556f9ddae2f57038e7cc18d8f7d278"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.649638 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4ab-account-create-update-gmflk" event={"ID":"e1ad3d67-1a07-4021-ac14-1f7660deedb9","Type":"ContainerStarted","Data":"823af9d38e6968a71bc565ca14a779898c0c232faee726e3066749a4a2b5963c"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.649668 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4ab-account-create-update-gmflk" event={"ID":"e1ad3d67-1a07-4021-ac14-1f7660deedb9","Type":"ContainerStarted","Data":"32bf81ba436f15ee64e9b026797a9cbe6100f7bc32e3dc4c40bdbce89997dba4"} Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.694382 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.112896303 podStartE2EDuration="47.69436482s" podCreationTimestamp="2026-01-05 22:11:06 +0000 UTC" firstStartedPulling="2026-01-05 22:11:39.89803692 +0000 UTC m=+1192.270036359" lastFinishedPulling="2026-01-05 22:11:50.479505437 +0000 UTC m=+1202.851504876" observedRunningTime="2026-01-05 22:11:53.675882556 +0000 UTC m=+1206.047881985" watchObservedRunningTime="2026-01-05 22:11:53.69436482 +0000 UTC m=+1206.066364259" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.732388 5034 scope.go:117] "RemoveContainer" containerID="76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.755987 5034 scope.go:117] "RemoveContainer" containerID="5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94" Jan 05 22:11:53 crc kubenswrapper[5034]: E0105 22:11:53.759768 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94\": container with ID starting with 5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94 not found: ID does not exist" containerID="5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.759822 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94"} err="failed to get container status \"5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94\": rpc error: code = NotFound desc = could not find container \"5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94\": container with ID starting with 5f92dcd1ccc684b6d80a53775a6d944a75a7e5a39a084dca6723c73d8c19ac94 not found: ID does not exist" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.759855 5034 scope.go:117] "RemoveContainer" containerID="76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8" Jan 05 22:11:53 crc kubenswrapper[5034]: E0105 22:11:53.762172 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8\": container with ID starting with 76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8 not found: ID does not exist" containerID="76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.762201 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8"} err="failed to get container status \"76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8\": rpc error: code = NotFound desc = could not find container \"76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8\": container with ID starting with 76a1456a6889c394b07c58fbc5bc2d76bf45f7bcc40aec4098d0b339b2a116c8 not found: ID does not exist" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.789785 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-6tszf" podStartSLOduration=2.789763612 podStartE2EDuration="2.789763612s" podCreationTimestamp="2026-01-05 22:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:11:53.780479799 +0000 UTC m=+1206.152479228" watchObservedRunningTime="2026-01-05 22:11:53.789763612 +0000 UTC m=+1206.161763051" Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.849164 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-f5l6r"] Jan 05 22:11:53 crc kubenswrapper[5034]: I0105 22:11:53.854415 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-f5l6r"] Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.016915 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-fqv8q"] Jan 05 22:11:54 crc kubenswrapper[5034]: E0105 22:11:54.017439 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" containerName="dnsmasq-dns" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.017457 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" containerName="dnsmasq-dns" Jan 05 22:11:54 crc kubenswrapper[5034]: E0105 22:11:54.017488 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" containerName="init" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.017496 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" containerName="init" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.017672 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" containerName="dnsmasq-dns" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.018687 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.021143 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.032661 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-fqv8q"] Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.110688 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.110901 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.110993 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.111040 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-config\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.111238 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b99s2\" (UniqueName: \"kubernetes.io/projected/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-kube-api-access-b99s2\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.111325 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.212969 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b99s2\" (UniqueName: \"kubernetes.io/projected/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-kube-api-access-b99s2\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.213045 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: 
\"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.213113 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.213167 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.213196 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.213220 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-config\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.214134 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-config\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.214141 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.214207 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.214860 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.214946 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc 
kubenswrapper[5034]: I0105 22:11:54.246688 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b99s2\" (UniqueName: \"kubernetes.io/projected/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-kube-api-access-b99s2\") pod \"dnsmasq-dns-56c9bc6f5c-fqv8q\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.335137 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.663946 5034 generic.go:334] "Generic (PLEG): container finished" podID="65d4ede5-3c50-4cfe-a1aa-276ef430fe97" containerID="6f134b9fa65defccabde82e84c6e4dc250a78dc9edb97b12d8ca3a8e9f5e1687" exitCode=0 Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.664433 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6tszf" event={"ID":"65d4ede5-3c50-4cfe-a1aa-276ef430fe97","Type":"ContainerDied","Data":"6f134b9fa65defccabde82e84c6e4dc250a78dc9edb97b12d8ca3a8e9f5e1687"} Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.667269 5034 generic.go:334] "Generic (PLEG): container finished" podID="33440247-28b2-4dbb-97ba-868cda48348e" containerID="86cd8648ad710237e2c737321d8579ef123c2c5c6943eff3a278dc4f5216f2de" exitCode=0 Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.667337 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2188-account-create-update-r8nqh" event={"ID":"33440247-28b2-4dbb-97ba-868cda48348e","Type":"ContainerDied","Data":"86cd8648ad710237e2c737321d8579ef123c2c5c6943eff3a278dc4f5216f2de"} Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.693513 5034 generic.go:334] "Generic (PLEG): container finished" podID="e1ad3d67-1a07-4021-ac14-1f7660deedb9" containerID="823af9d38e6968a71bc565ca14a779898c0c232faee726e3066749a4a2b5963c" exitCode=0 Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.693724 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4ab-account-create-update-gmflk" event={"ID":"e1ad3d67-1a07-4021-ac14-1f7660deedb9","Type":"ContainerDied","Data":"823af9d38e6968a71bc565ca14a779898c0c232faee726e3066749a4a2b5963c"} Jan 05 22:11:54 crc kubenswrapper[5034]: I0105 22:11:54.873788 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-fqv8q"] Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.178306 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.235877 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2thkz\" (UniqueName: \"kubernetes.io/projected/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-kube-api-access-2thkz\") pod \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\" (UID: \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\") " Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.236092 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-operator-scripts\") pod \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\" (UID: \"b08a0a0a-78db-4b23-b4bd-15c14d70c14a\") " Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.238815 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b08a0a0a-78db-4b23-b4bd-15c14d70c14a" (UID: "b08a0a0a-78db-4b23-b4bd-15c14d70c14a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.243570 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-kube-api-access-2thkz" (OuterVolumeSpecName: "kube-api-access-2thkz") pod "b08a0a0a-78db-4b23-b4bd-15c14d70c14a" (UID: "b08a0a0a-78db-4b23-b4bd-15c14d70c14a"). InnerVolumeSpecName "kube-api-access-2thkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.311103 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.314059 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.322999 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-92frq" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.338495 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.338528 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2thkz\" (UniqueName: \"kubernetes.io/projected/b08a0a0a-78db-4b23-b4bd-15c14d70c14a-kube-api-access-2thkz\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.439758 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcms2\" (UniqueName: \"kubernetes.io/projected/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-kube-api-access-bcms2\") pod \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\" (UID: \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\") " Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.440325 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rwlz\" (UniqueName: \"kubernetes.io/projected/e1ad3d67-1a07-4021-ac14-1f7660deedb9-kube-api-access-2rwlz\") pod \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\" (UID: \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\") " Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.440363 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2741770-25b1-43ea-878d-f57b57e65fac-operator-scripts\") pod \"f2741770-25b1-43ea-878d-f57b57e65fac\" (UID: \"f2741770-25b1-43ea-878d-f57b57e65fac\") " Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.440460 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-operator-scripts\") pod \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\" (UID: \"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25\") " Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.440514 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ad3d67-1a07-4021-ac14-1f7660deedb9-operator-scripts\") pod \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\" (UID: \"e1ad3d67-1a07-4021-ac14-1f7660deedb9\") " Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.440546 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwwv2\" (UniqueName: \"kubernetes.io/projected/f2741770-25b1-43ea-878d-f57b57e65fac-kube-api-access-kwwv2\") pod \"f2741770-25b1-43ea-878d-f57b57e65fac\" (UID: \"f2741770-25b1-43ea-878d-f57b57e65fac\") " Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.441141 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef7e5dc1-c4d9-4481-a667-3cdf0a550f25" (UID: "ef7e5dc1-c4d9-4481-a667-3cdf0a550f25"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.441233 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1ad3d67-1a07-4021-ac14-1f7660deedb9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1ad3d67-1a07-4021-ac14-1f7660deedb9" (UID: "e1ad3d67-1a07-4021-ac14-1f7660deedb9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.441277 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2741770-25b1-43ea-878d-f57b57e65fac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2741770-25b1-43ea-878d-f57b57e65fac" (UID: "f2741770-25b1-43ea-878d-f57b57e65fac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.442120 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2741770-25b1-43ea-878d-f57b57e65fac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.442148 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.442160 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ad3d67-1a07-4021-ac14-1f7660deedb9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.444357 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-kube-api-access-bcms2" (OuterVolumeSpecName: "kube-api-access-bcms2") pod "ef7e5dc1-c4d9-4481-a667-3cdf0a550f25" (UID: "ef7e5dc1-c4d9-4481-a667-3cdf0a550f25"). InnerVolumeSpecName "kube-api-access-bcms2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.446171 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2741770-25b1-43ea-878d-f57b57e65fac-kube-api-access-kwwv2" (OuterVolumeSpecName: "kube-api-access-kwwv2") pod "f2741770-25b1-43ea-878d-f57b57e65fac" (UID: "f2741770-25b1-43ea-878d-f57b57e65fac"). InnerVolumeSpecName "kube-api-access-kwwv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.447356 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ad3d67-1a07-4021-ac14-1f7660deedb9-kube-api-access-2rwlz" (OuterVolumeSpecName: "kube-api-access-2rwlz") pod "e1ad3d67-1a07-4021-ac14-1f7660deedb9" (UID: "e1ad3d67-1a07-4021-ac14-1f7660deedb9"). InnerVolumeSpecName "kube-api-access-2rwlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.543363 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwwv2\" (UniqueName: \"kubernetes.io/projected/f2741770-25b1-43ea-878d-f57b57e65fac-kube-api-access-kwwv2\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.543395 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcms2\" (UniqueName: \"kubernetes.io/projected/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25-kube-api-access-bcms2\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.543404 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rwlz\" (UniqueName: \"kubernetes.io/projected/e1ad3d67-1a07-4021-ac14-1f7660deedb9-kube-api-access-2rwlz\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.710809 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6fzvz" event={"ID":"b08a0a0a-78db-4b23-b4bd-15c14d70c14a","Type":"ContainerDied","Data":"5f0bd80a29280f4646e3c7ca1ded7c7768a7eb3963808e84a15559463f048cd4"} Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.710875 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f0bd80a29280f4646e3c7ca1ded7c7768a7eb3963808e84a15559463f048cd4" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.710832 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6fzvz" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.715427 5034 generic.go:334] "Generic (PLEG): container finished" podID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerID="04c9b8a51d04b9d83a8455cb5e2ae087c20cce6dc088e72a9aab30926faf0d15" exitCode=0 Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.715537 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" event={"ID":"6298c5c7-649d-4f0f-bd26-b4b84f37d53d","Type":"ContainerDied","Data":"04c9b8a51d04b9d83a8455cb5e2ae087c20cce6dc088e72a9aab30926faf0d15"} Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.715577 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" event={"ID":"6298c5c7-649d-4f0f-bd26-b4b84f37d53d","Type":"ContainerStarted","Data":"8b833a1eccf67d211c6c947ac2491251a1b9f12a622b0b2974f52f18826fc8a0"} Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.720449 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9182-account-create-update-w78zx" event={"ID":"ef7e5dc1-c4d9-4481-a667-3cdf0a550f25","Type":"ContainerDied","Data":"e06e6a647893a9c8935539bb3d72da7545605fec6752ab54be5729447383b850"} Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.720523 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e06e6a647893a9c8935539bb3d72da7545605fec6752ab54be5729447383b850" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.720469 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9182-account-create-update-w78zx" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.723134 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c4ab-account-create-update-gmflk" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.723130 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4ab-account-create-update-gmflk" event={"ID":"e1ad3d67-1a07-4021-ac14-1f7660deedb9","Type":"ContainerDied","Data":"32bf81ba436f15ee64e9b026797a9cbe6100f7bc32e3dc4c40bdbce89997dba4"} Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.723211 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32bf81ba436f15ee64e9b026797a9cbe6100f7bc32e3dc4c40bdbce89997dba4" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.724599 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-92frq" event={"ID":"f2741770-25b1-43ea-878d-f57b57e65fac","Type":"ContainerDied","Data":"a24d0b4ff6217e1429e187e7af36eb61d7960d3497a4a74e2b431c567e1ff4e0"} Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.724676 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24d0b4ff6217e1429e187e7af36eb61d7960d3497a4a74e2b431c567e1ff4e0" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.724868 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-92frq" Jan 05 22:11:55 crc kubenswrapper[5034]: I0105 22:11:55.860584 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47043bfb-044e-4b09-9c61-c97cf3b17a5e" path="/var/lib/kubelet/pods/47043bfb-044e-4b09-9c61-c97cf3b17a5e/volumes" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.492834 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.572302 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.605722 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33440247-28b2-4dbb-97ba-868cda48348e-operator-scripts\") pod \"33440247-28b2-4dbb-97ba-868cda48348e\" (UID: \"33440247-28b2-4dbb-97ba-868cda48348e\") " Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.605807 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gtfl\" (UniqueName: \"kubernetes.io/projected/33440247-28b2-4dbb-97ba-868cda48348e-kube-api-access-9gtfl\") pod \"33440247-28b2-4dbb-97ba-868cda48348e\" (UID: \"33440247-28b2-4dbb-97ba-868cda48348e\") " Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.608430 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33440247-28b2-4dbb-97ba-868cda48348e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33440247-28b2-4dbb-97ba-868cda48348e" (UID: "33440247-28b2-4dbb-97ba-868cda48348e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.615958 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33440247-28b2-4dbb-97ba-868cda48348e-kube-api-access-9gtfl" (OuterVolumeSpecName: "kube-api-access-9gtfl") pod "33440247-28b2-4dbb-97ba-868cda48348e" (UID: "33440247-28b2-4dbb-97ba-868cda48348e"). InnerVolumeSpecName "kube-api-access-9gtfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.707970 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7624\" (UniqueName: \"kubernetes.io/projected/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-kube-api-access-c7624\") pod \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\" (UID: \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\") " Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.708042 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-operator-scripts\") pod \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\" (UID: \"65d4ede5-3c50-4cfe-a1aa-276ef430fe97\") " Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.708443 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33440247-28b2-4dbb-97ba-868cda48348e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.708463 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gtfl\" (UniqueName: \"kubernetes.io/projected/33440247-28b2-4dbb-97ba-868cda48348e-kube-api-access-9gtfl\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.708603 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65d4ede5-3c50-4cfe-a1aa-276ef430fe97" (UID: "65d4ede5-3c50-4cfe-a1aa-276ef430fe97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.711978 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-kube-api-access-c7624" (OuterVolumeSpecName: "kube-api-access-c7624") pod "65d4ede5-3c50-4cfe-a1aa-276ef430fe97" (UID: "65d4ede5-3c50-4cfe-a1aa-276ef430fe97"). InnerVolumeSpecName "kube-api-access-c7624". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.755142 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" event={"ID":"6298c5c7-649d-4f0f-bd26-b4b84f37d53d","Type":"ContainerStarted","Data":"0a346af9b96064ea27c7014575e6912ab244dbf1361e54725182abb31ee39e46"} Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.756560 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.759469 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l5x88" event={"ID":"fa9b2abe-27f2-42c1-b085-c58641532b1a","Type":"ContainerStarted","Data":"7fb9caeb268b5b9cc7ce222b9523d083a7e5465a538b694b7fb7458fdc86ec5d"} Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.762129 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6tszf" event={"ID":"65d4ede5-3c50-4cfe-a1aa-276ef430fe97","Type":"ContainerDied","Data":"5370e39febe200519f3acced441fe42c4c8eca29e83e9209ded17b9bfe6e922d"} Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.762167 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5370e39febe200519f3acced441fe42c4c8eca29e83e9209ded17b9bfe6e922d" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.762225 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6tszf" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.770818 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2188-account-create-update-r8nqh" event={"ID":"33440247-28b2-4dbb-97ba-868cda48348e","Type":"ContainerDied","Data":"910714338f1fc533db5fcac18c2679be2306eed002fd0d471491212c0943f1b7"} Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.770873 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="910714338f1fc533db5fcac18c2679be2306eed002fd0d471491212c0943f1b7" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.770874 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2188-account-create-update-r8nqh" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.781956 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" podStartSLOduration=5.781931387 podStartE2EDuration="5.781931387s" podCreationTimestamp="2026-01-05 22:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:11:58.777977535 +0000 UTC m=+1211.149976974" watchObservedRunningTime="2026-01-05 22:11:58.781931387 +0000 UTC m=+1211.153930826" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.810026 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7624\" (UniqueName: \"kubernetes.io/projected/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-kube-api-access-c7624\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.810064 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65d4ede5-3c50-4cfe-a1aa-276ef430fe97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:58 crc kubenswrapper[5034]: I0105 22:11:58.824148 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-l5x88" podStartSLOduration=2.49956803 podStartE2EDuration="7.824125142s" podCreationTimestamp="2026-01-05 22:11:51 +0000 UTC" firstStartedPulling="2026-01-05 22:11:52.960726878 +0000 UTC m=+1205.332726317" lastFinishedPulling="2026-01-05 22:11:58.28528396 +0000 UTC m=+1210.657283429" observedRunningTime="2026-01-05 22:11:58.81558013 +0000 UTC m=+1211.187579569" watchObservedRunningTime="2026-01-05 22:11:58.824125142 +0000 UTC m=+1211.196124581" Jan 05 22:12:01 crc kubenswrapper[5034]: I0105 22:12:01.797302 5034 generic.go:334] "Generic (PLEG): container finished" podID="fa9b2abe-27f2-42c1-b085-c58641532b1a" containerID="7fb9caeb268b5b9cc7ce222b9523d083a7e5465a538b694b7fb7458fdc86ec5d" exitCode=0 Jan 05 22:12:01 crc kubenswrapper[5034]: I0105 22:12:01.797458 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l5x88" event={"ID":"fa9b2abe-27f2-42c1-b085-c58641532b1a","Type":"ContainerDied","Data":"7fb9caeb268b5b9cc7ce222b9523d083a7e5465a538b694b7fb7458fdc86ec5d"} Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.201332 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-l5x88" Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.310610 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-config-data\") pod \"fa9b2abe-27f2-42c1-b085-c58641532b1a\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.310702 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-combined-ca-bundle\") pod \"fa9b2abe-27f2-42c1-b085-c58641532b1a\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.310786 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hmrz\" (UniqueName: \"kubernetes.io/projected/fa9b2abe-27f2-42c1-b085-c58641532b1a-kube-api-access-8hmrz\") pod \"fa9b2abe-27f2-42c1-b085-c58641532b1a\" (UID: \"fa9b2abe-27f2-42c1-b085-c58641532b1a\") " Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.332364 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9b2abe-27f2-42c1-b085-c58641532b1a-kube-api-access-8hmrz" (OuterVolumeSpecName: "kube-api-access-8hmrz") pod "fa9b2abe-27f2-42c1-b085-c58641532b1a" (UID: "fa9b2abe-27f2-42c1-b085-c58641532b1a"). InnerVolumeSpecName "kube-api-access-8hmrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.338735 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa9b2abe-27f2-42c1-b085-c58641532b1a" (UID: "fa9b2abe-27f2-42c1-b085-c58641532b1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.359060 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-config-data" (OuterVolumeSpecName: "config-data") pod "fa9b2abe-27f2-42c1-b085-c58641532b1a" (UID: "fa9b2abe-27f2-42c1-b085-c58641532b1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.412919 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.412970 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9b2abe-27f2-42c1-b085-c58641532b1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.412987 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hmrz\" (UniqueName: \"kubernetes.io/projected/fa9b2abe-27f2-42c1-b085-c58641532b1a-kube-api-access-8hmrz\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.815072 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l5x88" event={"ID":"fa9b2abe-27f2-42c1-b085-c58641532b1a","Type":"ContainerDied","Data":"83fc052e7613f3051f1ebf1e1f527d20c25fb1ca32a8e8e4b2e1046a7b44ec18"} Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.815518 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83fc052e7613f3051f1ebf1e1f527d20c25fb1ca32a8e8e4b2e1046a7b44ec18" Jan 05 22:12:03 crc kubenswrapper[5034]: I0105 22:12:03.815141 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l5x88" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.103030 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-fqv8q"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.106410 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" podUID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerName="dnsmasq-dns" containerID="cri-o://0a346af9b96064ea27c7014575e6912ab244dbf1361e54725182abb31ee39e46" gracePeriod=10 Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.114736 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.139417 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hmszp"] Jan 05 22:12:04 crc kubenswrapper[5034]: E0105 22:12:04.139801 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33440247-28b2-4dbb-97ba-868cda48348e" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.139819 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="33440247-28b2-4dbb-97ba-868cda48348e" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: E0105 22:12:04.139832 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7e5dc1-c4d9-4481-a667-3cdf0a550f25" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.139839 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7e5dc1-c4d9-4481-a667-3cdf0a550f25" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: E0105 22:12:04.139856 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2741770-25b1-43ea-878d-f57b57e65fac" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 
22:12:04.139863 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2741770-25b1-43ea-878d-f57b57e65fac" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: E0105 22:12:04.139874 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4ede5-3c50-4cfe-a1aa-276ef430fe97" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.139880 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4ede5-3c50-4cfe-a1aa-276ef430fe97" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: E0105 22:12:04.139896 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08a0a0a-78db-4b23-b4bd-15c14d70c14a" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.139905 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08a0a0a-78db-4b23-b4bd-15c14d70c14a" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: E0105 22:12:04.139918 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9b2abe-27f2-42c1-b085-c58641532b1a" containerName="keystone-db-sync" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.139926 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9b2abe-27f2-42c1-b085-c58641532b1a" containerName="keystone-db-sync" Jan 05 22:12:04 crc kubenswrapper[5034]: E0105 22:12:04.139939 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ad3d67-1a07-4021-ac14-1f7660deedb9" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.139945 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ad3d67-1a07-4021-ac14-1f7660deedb9" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.140157 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="33440247-28b2-4dbb-97ba-868cda48348e" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.140182 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2741770-25b1-43ea-878d-f57b57e65fac" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.140192 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7e5dc1-c4d9-4481-a667-3cdf0a550f25" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.140210 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08a0a0a-78db-4b23-b4bd-15c14d70c14a" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.140222 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ad3d67-1a07-4021-ac14-1f7660deedb9" containerName="mariadb-account-create-update" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.140237 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9b2abe-27f2-42c1-b085-c58641532b1a" containerName="keystone-db-sync" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.140253 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4ede5-3c50-4cfe-a1aa-276ef430fe97" containerName="mariadb-database-create" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.140909 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.150628 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.157647 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-62wxh" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.158023 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.162842 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.163149 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.181338 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hmszp"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.237764 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-scripts\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.237818 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-config-data\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.237874 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-credential-keys\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.237973 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwlxj\" (UniqueName: \"kubernetes.io/projected/fea94cbd-19d3-4e67-8104-2de9763e5035-kube-api-access-kwlxj\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.238002 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-combined-ca-bundle\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.238024 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-fernet-keys\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.272745 5034 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-54b4bb76d5-xpnzz"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.274389 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.293738 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xpnzz"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.336946 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" podUID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340639 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340692 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwlxj\" (UniqueName: \"kubernetes.io/projected/fea94cbd-19d3-4e67-8104-2de9763e5035-kube-api-access-kwlxj\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340731 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-combined-ca-bundle\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340753 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-fernet-keys\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340775 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-config\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340811 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340834 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-scripts\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340853 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-config-data\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340882 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45w4h\" (UniqueName: \"kubernetes.io/projected/7a9f3bbc-2012-403e-8772-1115579feebf-kube-api-access-45w4h\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340903 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340924 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.340961 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-credential-keys\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.358303 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-credential-keys\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.362102 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-combined-ca-bundle\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.368939 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-scripts\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.369009 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-fernet-keys\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.372185 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-config-data\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.413854 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwlxj\" (UniqueName: \"kubernetes.io/projected/fea94cbd-19d3-4e67-8104-2de9763e5035-kube-api-access-kwlxj\") pod \"keystone-bootstrap-hmszp\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.444392 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.444460 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45w4h\" (UniqueName: \"kubernetes.io/projected/7a9f3bbc-2012-403e-8772-1115579feebf-kube-api-access-45w4h\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.444483 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.444502 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.444587 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.444627 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-config\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.445636 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-config\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.448539 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-nb\") pod 
\"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.454359 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.456316 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.456997 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.468493 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.546588 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-sh52t"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.546978 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45w4h\" (UniqueName: \"kubernetes.io/projected/7a9f3bbc-2012-403e-8772-1115579feebf-kube-api-access-45w4h\") pod \"dnsmasq-dns-54b4bb76d5-xpnzz\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.548161 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.562849 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.575614 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tdccx" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.575797 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.593206 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dkbzg"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.605884 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.614984 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.615344 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-77mmv" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.620186 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.638689 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.644825 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sh52t"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.656506 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm29b\" (UniqueName: \"kubernetes.io/projected/32ab947b-542c-4bd4-a4e9-493332d7caf5-kube-api-access-jm29b\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.656586 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-config-data\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.660436 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-combined-ca-bundle\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.660534 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93443f38-a401-43ed-8ba6-7e0ebef66eb5-etc-machine-id\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.660557 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-db-sync-config-data\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.660655 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-combined-ca-bundle\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.660687 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kdt\" (UniqueName: 
\"kubernetes.io/projected/93443f38-a401-43ed-8ba6-7e0ebef66eb5-kube-api-access-v4kdt\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.660744 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-scripts\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.660853 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-config\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.717234 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dkbzg"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.762787 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm29b\" (UniqueName: \"kubernetes.io/projected/32ab947b-542c-4bd4-a4e9-493332d7caf5-kube-api-access-jm29b\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.762861 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-config-data\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.762906 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-combined-ca-bundle\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.762945 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93443f38-a401-43ed-8ba6-7e0ebef66eb5-etc-machine-id\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.762973 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-db-sync-config-data\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.763009 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-combined-ca-bundle\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.763032 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4kdt\" 
(UniqueName: \"kubernetes.io/projected/93443f38-a401-43ed-8ba6-7e0ebef66eb5-kube-api-access-v4kdt\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.763060 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-scripts\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.763104 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-config\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.767633 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-config\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.768790 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.786107 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.787975 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-config-data\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.788625 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.788819 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93443f38-a401-43ed-8ba6-7e0ebef66eb5-etc-machine-id\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.796740 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.797044 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.802548 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-combined-ca-bundle\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.830833 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-combined-ca-bundle\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc 
kubenswrapper[5034]: I0105 22:12:04.831006 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-scripts\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.837847 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4kdt\" (UniqueName: \"kubernetes.io/projected/93443f38-a401-43ed-8ba6-7e0ebef66eb5-kube-api-access-v4kdt\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.837980 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-db-sync-config-data\") pod \"cinder-db-sync-sh52t\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") " pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.866653 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhp49\" (UniqueName: \"kubernetes.io/projected/ac333e19-4263-460f-8fe5-d950677ef64f-kube-api-access-zhp49\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.866707 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.866733 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.866826 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-run-httpd\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.866860 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-config-data\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.866883 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-log-httpd\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.866913 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-scripts\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.867860 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm29b\" (UniqueName: \"kubernetes.io/projected/32ab947b-542c-4bd4-a4e9-493332d7caf5-kube-api-access-jm29b\") pod \"neutron-db-sync-dkbzg\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.870069 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-htrf9"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.894230 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.929056 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.934580 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8nx2b" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.937528 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.945981 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.952721 5034 generic.go:334] "Generic (PLEG): container finished" podID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerID="0a346af9b96064ea27c7014575e6912ab244dbf1361e54725182abb31ee39e46" exitCode=0 Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.952851 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" event={"ID":"6298c5c7-649d-4f0f-bd26-b4b84f37d53d","Type":"ContainerDied","Data":"0a346af9b96064ea27c7014575e6912ab244dbf1361e54725182abb31ee39e46"} Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.970261 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-config-data\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.970513 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nsm\" (UniqueName: \"kubernetes.io/projected/7ad742f5-9855-40e9-953f-fc2cf3baee89-kube-api-access-d4nsm\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.970620 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-run-httpd\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.970693 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-config-data\") pod 
\"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.970759 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-log-httpd\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.970873 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-scripts\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.970945 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-combined-ca-bundle\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.971010 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-scripts\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.971157 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhp49\" (UniqueName: \"kubernetes.io/projected/ac333e19-4263-460f-8fe5-d950677ef64f-kube-api-access-zhp49\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.971227 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.971315 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.971395 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad742f5-9855-40e9-953f-fc2cf3baee89-logs\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.972539 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-64l6z"] Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.972753 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-run-httpd\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc 
kubenswrapper[5034]: I0105 22:12:04.974050 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.979574 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-log-httpd\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.984155 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.989309 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.990665 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-scripts\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:04 crc kubenswrapper[5034]: I0105 22:12:04.993744 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.000539 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xdttp" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.003366 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-64l6z"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.006206 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.021262 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-htrf9"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.022943 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-config-data\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.038635 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhp49\" (UniqueName: \"kubernetes.io/projected/ac333e19-4263-460f-8fe5-d950677ef64f-kube-api-access-zhp49\") pod \"ceilometer-0\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " pod="openstack/ceilometer-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.047374 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xpnzz"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.078090 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldnv\" (UniqueName: \"kubernetes.io/projected/e91a5139-5537-4578-ab4f-67d52927afa9-kube-api-access-tldnv\") pod 
\"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.078149 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-db-sync-config-data\") pod \"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.078192 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad742f5-9855-40e9-953f-fc2cf3baee89-logs\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.078247 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-config-data\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.078272 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nsm\" (UniqueName: \"kubernetes.io/projected/7ad742f5-9855-40e9-953f-fc2cf3baee89-kube-api-access-d4nsm\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.078415 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-combined-ca-bundle\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.078435 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-scripts\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.078523 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-combined-ca-bundle\") pod \"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.079604 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad742f5-9855-40e9-953f-fc2cf3baee89-logs\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.092722 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-8jvdk"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.093788 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-config-data\") pod 
\"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.110270 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nsm\" (UniqueName: \"kubernetes.io/projected/7ad742f5-9855-40e9-953f-fc2cf3baee89-kube-api-access-d4nsm\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.110446 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-scripts\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.123284 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-combined-ca-bundle\") pod \"placement-db-sync-htrf9\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.137424 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.172304 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-8jvdk"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185308 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185405 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185471 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185535 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185592 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-combined-ca-bundle\") pod \"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " 
pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185626 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hx7\" (UniqueName: \"kubernetes.io/projected/193104db-05bf-409d-88e5-17753e72f1b0-kube-api-access-v5hx7\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185655 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-config\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185713 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tldnv\" (UniqueName: \"kubernetes.io/projected/e91a5139-5537-4578-ab4f-67d52927afa9-kube-api-access-tldnv\") pod \"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.185759 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-db-sync-config-data\") pod \"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.220566 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-combined-ca-bundle\") pod \"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.231029 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.241527 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-db-sync-config-data\") pod \"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.244931 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tldnv\" (UniqueName: \"kubernetes.io/projected/e91a5139-5537-4578-ab4f-67d52927afa9-kube-api-access-tldnv\") pod \"barbican-db-sync-64l6z\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.288665 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.289149 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hx7\" (UniqueName: \"kubernetes.io/projected/193104db-05bf-409d-88e5-17753e72f1b0-kube-api-access-v5hx7\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.289184 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-config\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.289358 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.289405 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.289439 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.290395 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 
22:12:05.290818 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.291141 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.291372 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-config\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.291611 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.319689 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hx7\" (UniqueName: \"kubernetes.io/projected/193104db-05bf-409d-88e5-17753e72f1b0-kube-api-access-v5hx7\") pod \"dnsmasq-dns-5dc4fcdbc-8jvdk\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.393983 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.412653 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.413117 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.414818 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.417900 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.419629 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.419889 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65j9f" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.420154 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.440563 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.494570 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.494617 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-logs\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.494683 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.494744 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.494781 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-config-data\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.494809 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnd2\" (UniqueName: \"kubernetes.io/projected/56cc479a-b886-4bef-96c9-cee705f49225-kube-api-access-8bnd2\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.494832 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-scripts\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.494873 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.525173 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.527098 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.530469 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.532865 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.540323 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.573681 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.596669 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-logs\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.596725 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.596765 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.596811 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.596852 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-config-data\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " 
pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.596879 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bnd2\" (UniqueName: \"kubernetes.io/projected/56cc479a-b886-4bef-96c9-cee705f49225-kube-api-access-8bnd2\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.596902 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-scripts\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.596935 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.597449 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-logs\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.598477 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.603622 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.604332 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.606106 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-config-data\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.615823 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: 
I0105 22:12:05.621015 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hmszp"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.626996 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bnd2\" (UniqueName: \"kubernetes.io/projected/56cc479a-b886-4bef-96c9-cee705f49225-kube-api-access-8bnd2\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.643018 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.645687 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-scripts\") pod \"glance-default-external-api-0\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.700243 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.700348 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7zl\" (UniqueName: \"kubernetes.io/projected/5c43714f-7be1-4db0-bbdd-96a8f788d315-kube-api-access-ht7zl\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.700422 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.700494 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.700524 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.700543 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.700578 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.700598 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.719571 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.755424 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xpnzz"] Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.802653 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-config\") pod \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803109 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b99s2\" (UniqueName: \"kubernetes.io/projected/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-kube-api-access-b99s2\") pod \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803192 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-swift-storage-0\") pod \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803262 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-nb\") pod \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803281 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-sb\") pod \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803297 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-svc\") pod \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\" (UID: \"6298c5c7-649d-4f0f-bd26-b4b84f37d53d\") " Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803671 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803739 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803771 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803792 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803833 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803854 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803904 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.803934 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7zl\" (UniqueName: \"kubernetes.io/projected/5c43714f-7be1-4db0-bbdd-96a8f788d315-kube-api-access-ht7zl\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.804598 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.815195 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.816263 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.824512 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-kube-api-access-b99s2" (OuterVolumeSpecName: "kube-api-access-b99s2") pod "6298c5c7-649d-4f0f-bd26-b4b84f37d53d" (UID: "6298c5c7-649d-4f0f-bd26-b4b84f37d53d"). InnerVolumeSpecName "kube-api-access-b99s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.841718 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.841889 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.843562 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.858621 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7zl\" (UniqueName: \"kubernetes.io/projected/5c43714f-7be1-4db0-bbdd-96a8f788d315-kube-api-access-ht7zl\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.863983 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.901693 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.908215 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b99s2\" (UniqueName: \"kubernetes.io/projected/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-kube-api-access-b99s2\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.912070 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dkbzg"] Jan 05 22:12:05 crc kubenswrapper[5034]: W0105 22:12:05.923387 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32ab947b_542c_4bd4_a4e9_493332d7caf5.slice/crio-4b7019e03c4f9ccb1a2f15ea35122e5398d70a2fa6baf9846981fe80dbd573d7 WatchSource:0}: Error finding container 4b7019e03c4f9ccb1a2f15ea35122e5398d70a2fa6baf9846981fe80dbd573d7: Status 404 returned error can't find the container with id 4b7019e03c4f9ccb1a2f15ea35122e5398d70a2fa6baf9846981fe80dbd573d7 Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.926564 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.962373 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dkbzg" event={"ID":"32ab947b-542c-4bd4-a4e9-493332d7caf5","Type":"ContainerStarted","Data":"4b7019e03c4f9ccb1a2f15ea35122e5398d70a2fa6baf9846981fe80dbd573d7"} Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.967894 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hmszp" event={"ID":"fea94cbd-19d3-4e67-8104-2de9763e5035","Type":"ContainerStarted","Data":"2ac74443a30f2b053309859e00383e74acbc8ff982087bd61d53e7214f800afc"} Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.979611 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" event={"ID":"6298c5c7-649d-4f0f-bd26-b4b84f37d53d","Type":"ContainerDied","Data":"8b833a1eccf67d211c6c947ac2491251a1b9f12a622b0b2974f52f18826fc8a0"} Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.979709 5034 scope.go:117] "RemoveContainer" containerID="0a346af9b96064ea27c7014575e6912ab244dbf1361e54725182abb31ee39e46" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.979922 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-fqv8q" Jan 05 22:12:05 crc kubenswrapper[5034]: I0105 22:12:05.984157 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" event={"ID":"7a9f3bbc-2012-403e-8772-1115579feebf","Type":"ContainerStarted","Data":"c156976d7e7d0818cb5dba479b3d70993eedb294b73f36a8f7d04827a5a029f8"} Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.045977 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6298c5c7-649d-4f0f-bd26-b4b84f37d53d" (UID: "6298c5c7-649d-4f0f-bd26-b4b84f37d53d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.046029 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6298c5c7-649d-4f0f-bd26-b4b84f37d53d" (UID: "6298c5c7-649d-4f0f-bd26-b4b84f37d53d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.046648 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-config" (OuterVolumeSpecName: "config") pod "6298c5c7-649d-4f0f-bd26-b4b84f37d53d" (UID: "6298c5c7-649d-4f0f-bd26-b4b84f37d53d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.082311 5034 scope.go:117] "RemoveContainer" containerID="04c9b8a51d04b9d83a8455cb5e2ae087c20cce6dc088e72a9aab30926faf0d15" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.082937 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6298c5c7-649d-4f0f-bd26-b4b84f37d53d" (UID: "6298c5c7-649d-4f0f-bd26-b4b84f37d53d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.083445 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6298c5c7-649d-4f0f-bd26-b4b84f37d53d" (UID: "6298c5c7-649d-4f0f-bd26-b4b84f37d53d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.100714 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sh52t"] Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.131130 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.131152 5034 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.131163 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.131174 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.131183 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6298c5c7-649d-4f0f-bd26-b4b84f37d53d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.171653 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-64l6z"] Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.179810 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:06 crc kubenswrapper[5034]: W0105 22:12:06.194237 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode91a5139_5537_4578_ab4f_67d52927afa9.slice/crio-bcd903d0aae433a1183f358671e8e3e57ee2ca2235e26f2aa070d19187149f7e WatchSource:0}: Error finding container bcd903d0aae433a1183f358671e8e3e57ee2ca2235e26f2aa070d19187149f7e: Status 404 returned error can't find the container with id bcd903d0aae433a1183f358671e8e3e57ee2ca2235e26f2aa070d19187149f7e Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.226763 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.424386 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-htrf9"] Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.437654 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-8jvdk"] Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.531855 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.637530 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-fqv8q"] Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.650519 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-fqv8q"] Jan 05 22:12:06 crc kubenswrapper[5034]: I0105 22:12:06.894032 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.001984 5034 generic.go:334] "Generic (PLEG): container finished" podID="193104db-05bf-409d-88e5-17753e72f1b0" containerID="95edfa9c69e2135b21b9ca62069ec17eadd79ca4c8dea12255126300eb4ff795" exitCode=0 Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.003198 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" event={"ID":"193104db-05bf-409d-88e5-17753e72f1b0","Type":"ContainerDied","Data":"95edfa9c69e2135b21b9ca62069ec17eadd79ca4c8dea12255126300eb4ff795"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.003228 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" event={"ID":"193104db-05bf-409d-88e5-17753e72f1b0","Type":"ContainerStarted","Data":"2fb915bb9c3c3a9f5d9b2648ccf3ea0b55f2339fadb2a40e3d4c528c09e731ea"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.014839 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-64l6z" event={"ID":"e91a5139-5537-4578-ab4f-67d52927afa9","Type":"ContainerStarted","Data":"bcd903d0aae433a1183f358671e8e3e57ee2ca2235e26f2aa070d19187149f7e"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.017634 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sh52t" event={"ID":"93443f38-a401-43ed-8ba6-7e0ebef66eb5","Type":"ContainerStarted","Data":"c9950fd583d61effaa59c817d8fcfc3604023b86e234808b3855806573b5d225"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.050168 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c43714f-7be1-4db0-bbdd-96a8f788d315","Type":"ContainerStarted","Data":"d5e7c156145c07a011a8f51477d51bb6998d89202788bd5d468c68bc4a7abdc5"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.054663 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htrf9" event={"ID":"7ad742f5-9855-40e9-953f-fc2cf3baee89","Type":"ContainerStarted","Data":"393c6db0c42826d3169e3fcb512818896bf5358f956d37a41209def900461d2e"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.061118 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hmszp" event={"ID":"fea94cbd-19d3-4e67-8104-2de9763e5035","Type":"ContainerStarted","Data":"b57af3a96d64d5830a8e96cc8238b65c1caa9a61ff64a604f65a7d792403e5f5"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.069090 5034 
generic.go:334] "Generic (PLEG): container finished" podID="7a9f3bbc-2012-403e-8772-1115579feebf" containerID="5c3a7038dafc175716615b671b4690af7803c285b070b1902defa7714e7f0085" exitCode=0 Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.069148 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" event={"ID":"7a9f3bbc-2012-403e-8772-1115579feebf","Type":"ContainerDied","Data":"5c3a7038dafc175716615b671b4690af7803c285b070b1902defa7714e7f0085"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.073229 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerStarted","Data":"7fab7983c003e6555c8aa8681dd55dd4827dab9898e12c28a207e008c3c93457"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.076883 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dkbzg" event={"ID":"32ab947b-542c-4bd4-a4e9-493332d7caf5","Type":"ContainerStarted","Data":"3c442e4078594f8e149be7d1516488eb0dc3ab8b56a67057b3cba6e43abc37dd"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.094294 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56cc479a-b886-4bef-96c9-cee705f49225","Type":"ContainerStarted","Data":"6b01a20abb59b834528caa5b83b89bdf506cd368db7e76fac056c3f77224dfbe"} Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.095001 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hmszp" podStartSLOduration=3.094973757 podStartE2EDuration="3.094973757s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:07.083263155 +0000 UTC m=+1219.455262594" watchObservedRunningTime="2026-01-05 22:12:07.094973757 +0000 UTC m=+1219.466973196" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.118762 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dkbzg" podStartSLOduration=3.118744461 podStartE2EDuration="3.118744461s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:07.102546822 +0000 UTC m=+1219.474546261" watchObservedRunningTime="2026-01-05 22:12:07.118744461 +0000 UTC m=+1219.490743900" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.311993 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.446955 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.595354 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.744054 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.794280 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45w4h\" (UniqueName: \"kubernetes.io/projected/7a9f3bbc-2012-403e-8772-1115579feebf-kube-api-access-45w4h\") pod \"7a9f3bbc-2012-403e-8772-1115579feebf\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.794332 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-svc\") pod \"7a9f3bbc-2012-403e-8772-1115579feebf\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.794393 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-swift-storage-0\") pod \"7a9f3bbc-2012-403e-8772-1115579feebf\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.794425 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-sb\") pod \"7a9f3bbc-2012-403e-8772-1115579feebf\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.794493 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-nb\") pod \"7a9f3bbc-2012-403e-8772-1115579feebf\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.794587 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-config\") pod \"7a9f3bbc-2012-403e-8772-1115579feebf\" (UID: \"7a9f3bbc-2012-403e-8772-1115579feebf\") " Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.849911 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9f3bbc-2012-403e-8772-1115579feebf-kube-api-access-45w4h" (OuterVolumeSpecName: "kube-api-access-45w4h") pod "7a9f3bbc-2012-403e-8772-1115579feebf" (UID: "7a9f3bbc-2012-403e-8772-1115579feebf"). InnerVolumeSpecName "kube-api-access-45w4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.879334 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" path="/var/lib/kubelet/pods/6298c5c7-649d-4f0f-bd26-b4b84f37d53d/volumes" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.896628 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45w4h\" (UniqueName: \"kubernetes.io/projected/7a9f3bbc-2012-403e-8772-1115579feebf-kube-api-access-45w4h\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.962102 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7a9f3bbc-2012-403e-8772-1115579feebf" (UID: "7a9f3bbc-2012-403e-8772-1115579feebf"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.962143 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7a9f3bbc-2012-403e-8772-1115579feebf" (UID: "7a9f3bbc-2012-403e-8772-1115579feebf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.963578 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7a9f3bbc-2012-403e-8772-1115579feebf" (UID: "7a9f3bbc-2012-403e-8772-1115579feebf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.981839 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-config" (OuterVolumeSpecName: "config") pod "7a9f3bbc-2012-403e-8772-1115579feebf" (UID: "7a9f3bbc-2012-403e-8772-1115579feebf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.982389 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a9f3bbc-2012-403e-8772-1115579feebf" (UID: "7a9f3bbc-2012-403e-8772-1115579feebf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.998316 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.998364 5034 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.998377 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.998385 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:07 crc kubenswrapper[5034]: I0105 22:12:07.998397 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9f3bbc-2012-403e-8772-1115579feebf-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.129970 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" event={"ID":"193104db-05bf-409d-88e5-17753e72f1b0","Type":"ContainerStarted","Data":"9ef22f78caba23264c42b8edcef4887fa1515322e3b9576c99272cea78bc2bdb"} Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.131110 5034 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.152999 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.152984 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-xpnzz" event={"ID":"7a9f3bbc-2012-403e-8772-1115579feebf","Type":"ContainerDied","Data":"c156976d7e7d0818cb5dba479b3d70993eedb294b73f36a8f7d04827a5a029f8"} Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.153150 5034 scope.go:117] "RemoveContainer" containerID="5c3a7038dafc175716615b671b4690af7803c285b070b1902defa7714e7f0085" Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.155181 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" podStartSLOduration=4.155164588 podStartE2EDuration="4.155164588s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:08.152214454 +0000 UTC m=+1220.524213913" watchObservedRunningTime="2026-01-05 22:12:08.155164588 +0000 UTC m=+1220.527164027" Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.167836 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56cc479a-b886-4bef-96c9-cee705f49225","Type":"ContainerStarted","Data":"fbc29ee973d0ab3ca46b02faf42d4841eced68ab10da4737c6a3b811a542f567"} Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.278965 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xpnzz"] Jan 05 22:12:08 crc kubenswrapper[5034]: I0105 22:12:08.291642 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xpnzz"] Jan 05 22:12:09 crc kubenswrapper[5034]: I0105 22:12:09.183298 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56cc479a-b886-4bef-96c9-cee705f49225","Type":"ContainerStarted","Data":"5010ff537cb0577f4c1bb5c4add45fa1c37b349be80bb25ed5edf3d04fa5f527"} Jan 05 22:12:09 crc kubenswrapper[5034]: I0105 22:12:09.183614 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="56cc479a-b886-4bef-96c9-cee705f49225" containerName="glance-log" containerID="cri-o://fbc29ee973d0ab3ca46b02faf42d4841eced68ab10da4737c6a3b811a542f567" gracePeriod=30 Jan 05 22:12:09 crc kubenswrapper[5034]: I0105 22:12:09.183941 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="56cc479a-b886-4bef-96c9-cee705f49225" containerName="glance-httpd" containerID="cri-o://5010ff537cb0577f4c1bb5c4add45fa1c37b349be80bb25ed5edf3d04fa5f527" gracePeriod=30 Jan 05 22:12:09 crc kubenswrapper[5034]: I0105 22:12:09.207893 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c43714f-7be1-4db0-bbdd-96a8f788d315","Type":"ContainerStarted","Data":"483ec52faed31c964e18112da2087591ef7afa754a8eb25dda6592e8381a2d0b"} Jan 05 22:12:09 crc kubenswrapper[5034]: I0105 22:12:09.216817 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.21679683 
podStartE2EDuration="5.21679683s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:09.20974071 +0000 UTC m=+1221.581740179" watchObservedRunningTime="2026-01-05 22:12:09.21679683 +0000 UTC m=+1221.588796259" Jan 05 22:12:09 crc kubenswrapper[5034]: I0105 22:12:09.851665 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9f3bbc-2012-403e-8772-1115579feebf" path="/var/lib/kubelet/pods/7a9f3bbc-2012-403e-8772-1115579feebf/volumes" Jan 05 22:12:10 crc kubenswrapper[5034]: I0105 22:12:10.245557 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c43714f-7be1-4db0-bbdd-96a8f788d315","Type":"ContainerStarted","Data":"22cfc639996ff95483f8aff520661c65ae1d44df0452a671ba1e8736323f5590"} Jan 05 22:12:10 crc kubenswrapper[5034]: I0105 22:12:10.245757 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerName="glance-log" containerID="cri-o://483ec52faed31c964e18112da2087591ef7afa754a8eb25dda6592e8381a2d0b" gracePeriod=30 Jan 05 22:12:10 crc kubenswrapper[5034]: I0105 22:12:10.246169 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerName="glance-httpd" containerID="cri-o://22cfc639996ff95483f8aff520661c65ae1d44df0452a671ba1e8736323f5590" gracePeriod=30 Jan 05 22:12:10 crc kubenswrapper[5034]: I0105 22:12:10.252704 5034 generic.go:334] "Generic (PLEG): container finished" podID="56cc479a-b886-4bef-96c9-cee705f49225" containerID="5010ff537cb0577f4c1bb5c4add45fa1c37b349be80bb25ed5edf3d04fa5f527" exitCode=143 Jan 05 22:12:10 crc kubenswrapper[5034]: I0105 22:12:10.253137 5034 generic.go:334] "Generic (PLEG): container finished" podID="56cc479a-b886-4bef-96c9-cee705f49225" containerID="fbc29ee973d0ab3ca46b02faf42d4841eced68ab10da4737c6a3b811a542f567" exitCode=143 Jan 05 22:12:10 crc kubenswrapper[5034]: I0105 22:12:10.253049 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56cc479a-b886-4bef-96c9-cee705f49225","Type":"ContainerDied","Data":"5010ff537cb0577f4c1bb5c4add45fa1c37b349be80bb25ed5edf3d04fa5f527"} Jan 05 22:12:10 crc kubenswrapper[5034]: I0105 22:12:10.253192 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56cc479a-b886-4bef-96c9-cee705f49225","Type":"ContainerDied","Data":"fbc29ee973d0ab3ca46b02faf42d4841eced68ab10da4737c6a3b811a542f567"} Jan 05 22:12:10 crc kubenswrapper[5034]: I0105 22:12:10.282159 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.282132255 podStartE2EDuration="6.282132255s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:10.268613462 +0000 UTC m=+1222.640612891" watchObservedRunningTime="2026-01-05 22:12:10.282132255 +0000 UTC m=+1222.654131694" Jan 05 22:12:11 crc kubenswrapper[5034]: I0105 22:12:11.265757 5034 generic.go:334] "Generic (PLEG): container finished" podID="5c43714f-7be1-4db0-bbdd-96a8f788d315" 
containerID="22cfc639996ff95483f8aff520661c65ae1d44df0452a671ba1e8736323f5590" exitCode=0 Jan 05 22:12:11 crc kubenswrapper[5034]: I0105 22:12:11.265792 5034 generic.go:334] "Generic (PLEG): container finished" podID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerID="483ec52faed31c964e18112da2087591ef7afa754a8eb25dda6592e8381a2d0b" exitCode=143 Jan 05 22:12:11 crc kubenswrapper[5034]: I0105 22:12:11.265852 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c43714f-7be1-4db0-bbdd-96a8f788d315","Type":"ContainerDied","Data":"22cfc639996ff95483f8aff520661c65ae1d44df0452a671ba1e8736323f5590"} Jan 05 22:12:11 crc kubenswrapper[5034]: I0105 22:12:11.266181 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c43714f-7be1-4db0-bbdd-96a8f788d315","Type":"ContainerDied","Data":"483ec52faed31c964e18112da2087591ef7afa754a8eb25dda6592e8381a2d0b"} Jan 05 22:12:11 crc kubenswrapper[5034]: I0105 22:12:11.272656 5034 generic.go:334] "Generic (PLEG): container finished" podID="fea94cbd-19d3-4e67-8104-2de9763e5035" containerID="b57af3a96d64d5830a8e96cc8238b65c1caa9a61ff64a604f65a7d792403e5f5" exitCode=0 Jan 05 22:12:11 crc kubenswrapper[5034]: I0105 22:12:11.272847 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hmszp" event={"ID":"fea94cbd-19d3-4e67-8104-2de9763e5035","Type":"ContainerDied","Data":"b57af3a96d64d5830a8e96cc8238b65c1caa9a61ff64a604f65a7d792403e5f5"} Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.615495 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hmszp" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.672054 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwlxj\" (UniqueName: \"kubernetes.io/projected/fea94cbd-19d3-4e67-8104-2de9763e5035-kube-api-access-kwlxj\") pod \"fea94cbd-19d3-4e67-8104-2de9763e5035\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.672126 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-scripts\") pod \"fea94cbd-19d3-4e67-8104-2de9763e5035\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.672192 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-config-data\") pod \"fea94cbd-19d3-4e67-8104-2de9763e5035\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.672229 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-fernet-keys\") pod \"fea94cbd-19d3-4e67-8104-2de9763e5035\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.672320 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-combined-ca-bundle\") pod \"fea94cbd-19d3-4e67-8104-2de9763e5035\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.672373 5034 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-credential-keys\") pod \"fea94cbd-19d3-4e67-8104-2de9763e5035\" (UID: \"fea94cbd-19d3-4e67-8104-2de9763e5035\") " Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.680442 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea94cbd-19d3-4e67-8104-2de9763e5035-kube-api-access-kwlxj" (OuterVolumeSpecName: "kube-api-access-kwlxj") pod "fea94cbd-19d3-4e67-8104-2de9763e5035" (UID: "fea94cbd-19d3-4e67-8104-2de9763e5035"). InnerVolumeSpecName "kube-api-access-kwlxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.693548 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fea94cbd-19d3-4e67-8104-2de9763e5035" (UID: "fea94cbd-19d3-4e67-8104-2de9763e5035"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.696553 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-scripts" (OuterVolumeSpecName: "scripts") pod "fea94cbd-19d3-4e67-8104-2de9763e5035" (UID: "fea94cbd-19d3-4e67-8104-2de9763e5035"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.696796 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fea94cbd-19d3-4e67-8104-2de9763e5035" (UID: "fea94cbd-19d3-4e67-8104-2de9763e5035"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.724170 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fea94cbd-19d3-4e67-8104-2de9763e5035" (UID: "fea94cbd-19d3-4e67-8104-2de9763e5035"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.743742 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-config-data" (OuterVolumeSpecName: "config-data") pod "fea94cbd-19d3-4e67-8104-2de9763e5035" (UID: "fea94cbd-19d3-4e67-8104-2de9763e5035"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.775003 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.775048 5034 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.775059 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwlxj\" (UniqueName: \"kubernetes.io/projected/fea94cbd-19d3-4e67-8104-2de9763e5035-kube-api-access-kwlxj\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.775087 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.775099 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:13 crc kubenswrapper[5034]: I0105 22:12:13.775108 5034 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fea94cbd-19d3-4e67-8104-2de9763e5035-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.326274 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hmszp" event={"ID":"fea94cbd-19d3-4e67-8104-2de9763e5035","Type":"ContainerDied","Data":"2ac74443a30f2b053309859e00383e74acbc8ff982087bd61d53e7214f800afc"} Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.326924 5034 util.go:48] "No ready sandbox for pod can be found. 
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.326924 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hmszp"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.326949 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ac74443a30f2b053309859e00383e74acbc8ff982087bd61d53e7214f800afc"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.805349 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hmszp"]
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.815519 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hmszp"]
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.887153 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zb7v6"]
Jan 05 22:12:14 crc kubenswrapper[5034]: E0105 22:12:14.887824 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea94cbd-19d3-4e67-8104-2de9763e5035" containerName="keystone-bootstrap"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.887847 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea94cbd-19d3-4e67-8104-2de9763e5035" containerName="keystone-bootstrap"
Jan 05 22:12:14 crc kubenswrapper[5034]: E0105 22:12:14.887861 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerName="init"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.887868 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerName="init"
Jan 05 22:12:14 crc kubenswrapper[5034]: E0105 22:12:14.887902 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerName="dnsmasq-dns"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.887914 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerName="dnsmasq-dns"
Jan 05 22:12:14 crc kubenswrapper[5034]: E0105 22:12:14.887931 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9f3bbc-2012-403e-8772-1115579feebf" containerName="init"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.887938 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9f3bbc-2012-403e-8772-1115579feebf" containerName="init"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.888139 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea94cbd-19d3-4e67-8104-2de9763e5035" containerName="keystone-bootstrap"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.888153 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9f3bbc-2012-403e-8772-1115579feebf" containerName="init"
Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.888170 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6298c5c7-649d-4f0f-bd26-b4b84f37d53d" containerName="dnsmasq-dns"
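When a new pod is admitted, the CPU and memory managers first purge checkpointed assignments whose pod UIDs no longer exist, which is what the `RemoveStaleState` / `Deleted CPUSet assignment` pairs above record. A minimal sketch of that cleanup, with invented types and a toy assignment table, not the kubelet's real checkpoint code:

```go
// Drop any per-container assignment whose pod UID is no longer active.
package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	assignments := map[key]string{
		{podUID: "fea94cbd-19d3-4e67-8104-2de9763e5035", container: "keystone-bootstrap"}: "cpus 0-3",
	}
	active := map[string]bool{} // pod UIDs the kubelet currently knows about
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}
```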
Need to start a new one" pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.892813 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.893090 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.893210 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.893637 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.893699 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-62wxh" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.896904 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zb7v6"] Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.996794 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-credential-keys\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.996893 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-scripts\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.996971 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-combined-ca-bundle\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.997004 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wqtj\" (UniqueName: \"kubernetes.io/projected/29d670c8-6fba-43a1-a8e8-9bca9742792d-kube-api-access-5wqtj\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.997045 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-config-data\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:14 crc kubenswrapper[5034]: I0105 22:12:14.997064 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-fernet-keys\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.099371 5034 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-config-data\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.099441 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-fernet-keys\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.099484 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-credential-keys\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.099536 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-scripts\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.099617 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-combined-ca-bundle\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.099655 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wqtj\" (UniqueName: \"kubernetes.io/projected/29d670c8-6fba-43a1-a8e8-9bca9742792d-kube-api-access-5wqtj\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.115089 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-config-data\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.115200 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-fernet-keys\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.116599 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-credential-keys\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.118956 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-scripts\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " 
pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.119167 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-combined-ca-bundle\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.119859 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wqtj\" (UniqueName: \"kubernetes.io/projected/29d670c8-6fba-43a1-a8e8-9bca9742792d-kube-api-access-5wqtj\") pod \"keystone-bootstrap-zb7v6\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.217551 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.575366 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.646136 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-p4hrh"] Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.646435 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="dnsmasq-dns" containerID="cri-o://f39e48f2802b2ac2efe4275c7ddb8cb739d20126bfdde36345bd13a09f2eae54" gracePeriod=10 Jan 05 22:12:15 crc kubenswrapper[5034]: I0105 22:12:15.850691 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea94cbd-19d3-4e67-8104-2de9763e5035" path="/var/lib/kubelet/pods/fea94cbd-19d3-4e67-8104-2de9763e5035/volumes" Jan 05 22:12:16 crc kubenswrapper[5034]: I0105 22:12:16.349846 5034 generic.go:334] "Generic (PLEG): container finished" podID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerID="f39e48f2802b2ac2efe4275c7ddb8cb739d20126bfdde36345bd13a09f2eae54" exitCode=0 Jan 05 22:12:16 crc kubenswrapper[5034]: I0105 22:12:16.349926 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" event={"ID":"994faa91-11c3-465e-9d3f-2bdbf2b84328","Type":"ContainerDied","Data":"f39e48f2802b2ac2efe4275c7ddb8cb739d20126bfdde36345bd13a09f2eae54"} Jan 05 22:12:17 crc kubenswrapper[5034]: I0105 22:12:17.228882 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.548632 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.553533 5034 util.go:48] "No ready sandbox for pod can be found. 
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.553533 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684637 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"56cc479a-b886-4bef-96c9-cee705f49225\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684710 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-internal-tls-certs\") pod \"5c43714f-7be1-4db0-bbdd-96a8f788d315\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684809 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-scripts\") pod \"56cc479a-b886-4bef-96c9-cee705f49225\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684838 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-public-tls-certs\") pod \"56cc479a-b886-4bef-96c9-cee705f49225\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684862 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-logs\") pod \"5c43714f-7be1-4db0-bbdd-96a8f788d315\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684884 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-combined-ca-bundle\") pod \"56cc479a-b886-4bef-96c9-cee705f49225\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684910 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-logs\") pod \"56cc479a-b886-4bef-96c9-cee705f49225\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684934 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bnd2\" (UniqueName: \"kubernetes.io/projected/56cc479a-b886-4bef-96c9-cee705f49225-kube-api-access-8bnd2\") pod \"56cc479a-b886-4bef-96c9-cee705f49225\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.684966 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-httpd-run\") pod \"56cc479a-b886-4bef-96c9-cee705f49225\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.685019 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-config-data\") pod \"5c43714f-7be1-4db0-bbdd-96a8f788d315\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.685043 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-combined-ca-bundle\") pod \"5c43714f-7be1-4db0-bbdd-96a8f788d315\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686000 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-logs" (OuterVolumeSpecName: "logs") pod "56cc479a-b886-4bef-96c9-cee705f49225" (UID: "56cc479a-b886-4bef-96c9-cee705f49225"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686186 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "56cc479a-b886-4bef-96c9-cee705f49225" (UID: "56cc479a-b886-4bef-96c9-cee705f49225"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686213 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-config-data\") pod \"56cc479a-b886-4bef-96c9-cee705f49225\" (UID: \"56cc479a-b886-4bef-96c9-cee705f49225\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686332 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"5c43714f-7be1-4db0-bbdd-96a8f788d315\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686378 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-httpd-run\") pod \"5c43714f-7be1-4db0-bbdd-96a8f788d315\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686409 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7zl\" (UniqueName: \"kubernetes.io/projected/5c43714f-7be1-4db0-bbdd-96a8f788d315-kube-api-access-ht7zl\") pod \"5c43714f-7be1-4db0-bbdd-96a8f788d315\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686420 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-logs" (OuterVolumeSpecName: "logs") pod "5c43714f-7be1-4db0-bbdd-96a8f788d315" (UID: "5c43714f-7be1-4db0-bbdd-96a8f788d315"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686471 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-scripts\") pod \"5c43714f-7be1-4db0-bbdd-96a8f788d315\" (UID: \"5c43714f-7be1-4db0-bbdd-96a8f788d315\") "
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.686703 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c43714f-7be1-4db0-bbdd-96a8f788d315" (UID: "5c43714f-7be1-4db0-bbdd-96a8f788d315"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.687671 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-logs\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.687691 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-logs\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.687702 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56cc479a-b886-4bef-96c9-cee705f49225-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.687713 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c43714f-7be1-4db0-bbdd-96a8f788d315-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.692929 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "56cc479a-b886-4bef-96c9-cee705f49225" (UID: "56cc479a-b886-4bef-96c9-cee705f49225"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.692977 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56cc479a-b886-4bef-96c9-cee705f49225-kube-api-access-8bnd2" (OuterVolumeSpecName: "kube-api-access-8bnd2") pod "56cc479a-b886-4bef-96c9-cee705f49225" (UID: "56cc479a-b886-4bef-96c9-cee705f49225"). InnerVolumeSpecName "kube-api-access-8bnd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.694788 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c43714f-7be1-4db0-bbdd-96a8f788d315-kube-api-access-ht7zl" (OuterVolumeSpecName: "kube-api-access-ht7zl") pod "5c43714f-7be1-4db0-bbdd-96a8f788d315" (UID: "5c43714f-7be1-4db0-bbdd-96a8f788d315"). InnerVolumeSpecName "kube-api-access-ht7zl". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.695256 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-scripts" (OuterVolumeSpecName: "scripts") pod "56cc479a-b886-4bef-96c9-cee705f49225" (UID: "56cc479a-b886-4bef-96c9-cee705f49225"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.695533 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "5c43714f-7be1-4db0-bbdd-96a8f788d315" (UID: "5c43714f-7be1-4db0-bbdd-96a8f788d315"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.722391 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56cc479a-b886-4bef-96c9-cee705f49225" (UID: "56cc479a-b886-4bef-96c9-cee705f49225"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.731666 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c43714f-7be1-4db0-bbdd-96a8f788d315" (UID: "5c43714f-7be1-4db0-bbdd-96a8f788d315"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.744132 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-config-data" (OuterVolumeSpecName: "config-data") pod "56cc479a-b886-4bef-96c9-cee705f49225" (UID: "56cc479a-b886-4bef-96c9-cee705f49225"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.746849 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c43714f-7be1-4db0-bbdd-96a8f788d315" (UID: "5c43714f-7be1-4db0-bbdd-96a8f788d315"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.747934 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "56cc479a-b886-4bef-96c9-cee705f49225" (UID: "56cc479a-b886-4bef-96c9-cee705f49225"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.754937 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-config-data" (OuterVolumeSpecName: "config-data") pod "5c43714f-7be1-4db0-bbdd-96a8f788d315" (UID: "5c43714f-7be1-4db0-bbdd-96a8f788d315"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789296 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789339 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789353 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789364 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bnd2\" (UniqueName: \"kubernetes.io/projected/56cc479a-b886-4bef-96c9-cee705f49225-kube-api-access-8bnd2\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789375 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789386 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789395 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cc479a-b886-4bef-96c9-cee705f49225-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789433 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789442 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht7zl\" (UniqueName: \"kubernetes.io/projected/5c43714f-7be1-4db0-bbdd-96a8f788d315-kube-api-access-ht7zl\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789451 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789464 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.789472 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c43714f-7be1-4db0-bbdd-96a8f788d315-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.809552 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.810355 5034 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.891000 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:18 crc kubenswrapper[5034]: I0105 22:12:18.891051 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.385356 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.385603 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c43714f-7be1-4db0-bbdd-96a8f788d315","Type":"ContainerDied","Data":"d5e7c156145c07a011a8f51477d51bb6998d89202788bd5d468c68bc4a7abdc5"} Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.386267 5034 scope.go:117] "RemoveContainer" containerID="22cfc639996ff95483f8aff520661c65ae1d44df0452a671ba1e8736323f5590" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.391710 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56cc479a-b886-4bef-96c9-cee705f49225","Type":"ContainerDied","Data":"6b01a20abb59b834528caa5b83b89bdf506cd368db7e76fac056c3f77224dfbe"} Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.391817 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.463556 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.489121 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.516151 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.527305 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539011 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:19 crc kubenswrapper[5034]: E0105 22:12:19.539515 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cc479a-b886-4bef-96c9-cee705f49225" containerName="glance-log" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539536 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cc479a-b886-4bef-96c9-cee705f49225" containerName="glance-log" Jan 05 22:12:19 crc kubenswrapper[5034]: E0105 22:12:19.539560 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerName="glance-log" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539570 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerName="glance-log" Jan 05 22:12:19 crc kubenswrapper[5034]: E0105 22:12:19.539587 5034 cpu_manager.go:410] "RemoveStaleState: removing 
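The local-volume teardown above is two-phase: per-pod `UnmountVolume.TearDown` runs first, and only when no pod mount still references the device does the node-level `UnmountDevice` (operation_generator.go:917) run and the volume get reported detached. A reference-counting sketch of that ordering, with an invented map in place of the reconciler's real bookkeeping:

```go
// Global device unmount only after the last pod mount is gone.
package main

import "fmt"

func main() {
	podMounts := map[string][]string{ // device -> pods still using it
		"local-storage01-crc": {"glance-default-external-api-0"},
	}
	for dev, pods := range podMounts {
		for _, p := range pods {
			fmt.Printf("UnmountVolume.TearDown succeeded for pod %q\n", p)
		}
		podMounts[dev] = nil // all pod mounts torn down
		if len(podMounts[dev]) == 0 {
			fmt.Printf("UnmountDevice succeeded for volume %q\n", dev)
		}
	}
}
```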
container" podUID="56cc479a-b886-4bef-96c9-cee705f49225" containerName="glance-httpd" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539594 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cc479a-b886-4bef-96c9-cee705f49225" containerName="glance-httpd" Jan 05 22:12:19 crc kubenswrapper[5034]: E0105 22:12:19.539610 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerName="glance-httpd" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539625 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerName="glance-httpd" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539860 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerName="glance-log" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539882 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" containerName="glance-httpd" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539900 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="56cc479a-b886-4bef-96c9-cee705f49225" containerName="glance-log" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.539916 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="56cc479a-b886-4bef-96c9-cee705f49225" containerName="glance-httpd" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.541329 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.544250 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-65j9f" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.544513 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.544524 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.544985 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.553809 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.563733 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.570313 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.575438 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.575690 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.581256 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.605153 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.605200 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.605280 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.605371 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89qgt\" (UniqueName: \"kubernetes.io/projected/2ca0033b-cb51-4e99-83f5-da165dbaf071-kube-api-access-89qgt\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.605402 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.605430 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.605470 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " 
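The replacement glance pods bind the same local PVs (local-storage01-crc and local-storage02-crc) that the old pods just released. A hedged reconstruction of what such a local PV might look like, built with the real k8s.io/api/core/v1 types and inferred from the device mount path logged further below ("/mnt/openstack/pv02"); the capacity and other fields are assumptions, not read from this cluster:

```go
// Approximate shape of a local PV like "local-storage02-crc".
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage02-crc"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed size
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv02"},
			},
		},
	}
	fmt.Printf("%s -> %s\n", pv.Name, pv.Spec.Local.Path)
}
```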
pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.605501 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.706875 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.706977 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-scripts\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707012 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707038 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707101 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707137 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707174 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707202 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707226 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-logs\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707773 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfn2\" (UniqueName: \"kubernetes.io/projected/760b010c-9b6e-4ed6-8ae0-9af72816c192-kube-api-access-5hfn2\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707835 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89qgt\" (UniqueName: \"kubernetes.io/projected/2ca0033b-cb51-4e99-83f5-da165dbaf071-kube-api-access-89qgt\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707878 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707915 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707960 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.707983 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.708244 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.708311 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-config-data\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " 
pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.708361 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.708411 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.714059 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.716621 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.719277 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.727878 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.732332 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89qgt\" (UniqueName: \"kubernetes.io/projected/2ca0033b-cb51-4e99-83f5-da165dbaf071-kube-api-access-89qgt\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.742649 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.809972 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-config-data\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0" Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.811156 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.811207 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-scripts\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.811247 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.811279 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.811301 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.811329 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-logs\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.811368 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfn2\" (UniqueName: \"kubernetes.io/projected/760b010c-9b6e-4ed6-8ae0-9af72816c192-kube-api-access-5hfn2\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.812939 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.813299 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-logs\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.813505 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.816971 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.817443 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-scripts\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.817816 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-config-data\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.818016 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.830535 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfn2\" (UniqueName: \"kubernetes.io/projected/760b010c-9b6e-4ed6-8ae0-9af72816c192-kube-api-access-5hfn2\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.840314 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " pod="openstack/glance-default-external-api-0"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.851546 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56cc479a-b886-4bef-96c9-cee705f49225" path="/var/lib/kubelet/pods/56cc479a-b886-4bef-96c9-cee705f49225/volumes"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.854935 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c43714f-7be1-4db0-bbdd-96a8f788d315" path="/var/lib/kubelet/pods/5c43714f-7be1-4db0-bbdd-96a8f788d315/volumes"
Jan 05 22:12:19 crc kubenswrapper[5034]: I0105 22:12:19.865936 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:12:22 crc kubenswrapper[5034]: I0105 22:12:22.229544 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Jan 05 22:12:25 crc kubenswrapper[5034]: I0105 22:12:25.460737 5034 generic.go:334] "Generic (PLEG): container finished" podID="32ab947b-542c-4bd4-a4e9-493332d7caf5" containerID="3c442e4078594f8e149be7d1516488eb0dc3ab8b56a67057b3cba6e43abc37dd" exitCode=0 Jan 05 22:12:25 crc kubenswrapper[5034]: I0105 22:12:25.460910 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dkbzg" event={"ID":"32ab947b-542c-4bd4-a4e9-493332d7caf5","Type":"ContainerDied","Data":"3c442e4078594f8e149be7d1516488eb0dc3ab8b56a67057b3cba6e43abc37dd"} Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.465405 5034 scope.go:117] "RemoveContainer" containerID="483ec52faed31c964e18112da2087591ef7afa754a8eb25dda6592e8381a2d0b" Jan 05 22:12:27 crc kubenswrapper[5034]: E0105 22:12:27.476039 5034 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 05 22:12:27 crc kubenswrapper[5034]: E0105 22:12:27.476283 5034 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4kdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-sh52t_openstack(93443f38-a401-43ed-8ba6-7e0ebef66eb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:12:27 crc kubenswrapper[5034]: E0105 22:12:27.478299 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-sh52t" podUID="93443f38-a401-43ed-8ba6-7e0ebef66eb5" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.491805 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" event={"ID":"994faa91-11c3-465e-9d3f-2bdbf2b84328","Type":"ContainerDied","Data":"9523796f50b6b1cf713b3a2a765163fc19514f0fbe2319277613c9e2cd047dd5"} Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.491878 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9523796f50b6b1cf713b3a2a765163fc19514f0fbe2319277613c9e2cd047dd5" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.494204 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dkbzg" 
event={"ID":"32ab947b-542c-4bd4-a4e9-493332d7caf5","Type":"ContainerDied","Data":"4b7019e03c4f9ccb1a2f15ea35122e5398d70a2fa6baf9846981fe80dbd573d7"} Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.494243 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7019e03c4f9ccb1a2f15ea35122e5398d70a2fa6baf9846981fe80dbd573d7" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.621086 5034 scope.go:117] "RemoveContainer" containerID="5010ff537cb0577f4c1bb5c4add45fa1c37b349be80bb25ed5edf3d04fa5f527" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.710230 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.715195 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b688b\" (UniqueName: \"kubernetes.io/projected/994faa91-11c3-465e-9d3f-2bdbf2b84328-kube-api-access-b688b\") pod \"994faa91-11c3-465e-9d3f-2bdbf2b84328\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.715470 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-nb\") pod \"994faa91-11c3-465e-9d3f-2bdbf2b84328\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.715551 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-sb\") pod \"994faa91-11c3-465e-9d3f-2bdbf2b84328\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.715588 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-config\") pod \"994faa91-11c3-465e-9d3f-2bdbf2b84328\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.715632 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-dns-svc\") pod \"994faa91-11c3-465e-9d3f-2bdbf2b84328\" (UID: \"994faa91-11c3-465e-9d3f-2bdbf2b84328\") " Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.732648 5034 scope.go:117] "RemoveContainer" containerID="fbc29ee973d0ab3ca46b02faf42d4841eced68ab10da4737c6a3b811a542f567" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.734439 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994faa91-11c3-465e-9d3f-2bdbf2b84328-kube-api-access-b688b" (OuterVolumeSpecName: "kube-api-access-b688b") pod "994faa91-11c3-465e-9d3f-2bdbf2b84328" (UID: "994faa91-11c3-465e-9d3f-2bdbf2b84328"). InnerVolumeSpecName "kube-api-access-b688b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.809510 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "994faa91-11c3-465e-9d3f-2bdbf2b84328" (UID: "994faa91-11c3-465e-9d3f-2bdbf2b84328"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.820062 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b688b\" (UniqueName: \"kubernetes.io/projected/994faa91-11c3-465e-9d3f-2bdbf2b84328-kube-api-access-b688b\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.820450 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.828173 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "994faa91-11c3-465e-9d3f-2bdbf2b84328" (UID: "994faa91-11c3-465e-9d3f-2bdbf2b84328"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.832114 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "994faa91-11c3-465e-9d3f-2bdbf2b84328" (UID: "994faa91-11c3-465e-9d3f-2bdbf2b84328"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.845896 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-config" (OuterVolumeSpecName: "config") pod "994faa91-11c3-465e-9d3f-2bdbf2b84328" (UID: "994faa91-11c3-465e-9d3f-2bdbf2b84328"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.849525 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.920904 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm29b\" (UniqueName: \"kubernetes.io/projected/32ab947b-542c-4bd4-a4e9-493332d7caf5-kube-api-access-jm29b\") pod \"32ab947b-542c-4bd4-a4e9-493332d7caf5\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.921031 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-combined-ca-bundle\") pod \"32ab947b-542c-4bd4-a4e9-493332d7caf5\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.921064 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-config\") pod \"32ab947b-542c-4bd4-a4e9-493332d7caf5\" (UID: \"32ab947b-542c-4bd4-a4e9-493332d7caf5\") " Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.921336 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.921350 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.921359 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994faa91-11c3-465e-9d3f-2bdbf2b84328-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.943284 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ab947b-542c-4bd4-a4e9-493332d7caf5-kube-api-access-jm29b" (OuterVolumeSpecName: "kube-api-access-jm29b") pod "32ab947b-542c-4bd4-a4e9-493332d7caf5" (UID: "32ab947b-542c-4bd4-a4e9-493332d7caf5"). InnerVolumeSpecName "kube-api-access-jm29b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.974008 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32ab947b-542c-4bd4-a4e9-493332d7caf5" (UID: "32ab947b-542c-4bd4-a4e9-493332d7caf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:27 crc kubenswrapper[5034]: I0105 22:12:27.975777 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-config" (OuterVolumeSpecName: "config") pod "32ab947b-542c-4bd4-a4e9-493332d7caf5" (UID: "32ab947b-542c-4bd4-a4e9-493332d7caf5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.022826 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm29b\" (UniqueName: \"kubernetes.io/projected/32ab947b-542c-4bd4-a4e9-493332d7caf5-kube-api-access-jm29b\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.022865 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.022876 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/32ab947b-542c-4bd4-a4e9-493332d7caf5-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.045109 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zb7v6"] Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.135966 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.264324 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.516121 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ca0033b-cb51-4e99-83f5-da165dbaf071","Type":"ContainerStarted","Data":"6f9467df80d98371cd926be649409a442971ac2694e67b0e8432a8f3dca3cb68"} Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.517725 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htrf9" event={"ID":"7ad742f5-9855-40e9-953f-fc2cf3baee89","Type":"ContainerStarted","Data":"8dfe9432e0a3f3a5fcfa99b3016481d77d604ec3574ab04670586251b9e6233d"} Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.521295 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-64l6z" event={"ID":"e91a5139-5537-4578-ab4f-67d52927afa9","Type":"ContainerStarted","Data":"1f543abb49aaf470b43131d8884c14c08720cf8fb15c21caa315c8791607b026"} Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.527594 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"760b010c-9b6e-4ed6-8ae0-9af72816c192","Type":"ContainerStarted","Data":"18f10b0cf95084ff0f77ee2faf230fae11346422a9a664379ac4cc52f31d1209"} Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.531549 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerStarted","Data":"bff58d4173744054a45b690f7021750cf32fe87700ffee14cd9c2456b9349762"} Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.542027 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-htrf9" podStartSLOduration=3.517502146 podStartE2EDuration="24.542006558s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="2026-01-05 22:12:06.427341416 +0000 UTC m=+1218.799340855" lastFinishedPulling="2026-01-05 22:12:27.451845828 +0000 UTC m=+1239.823845267" observedRunningTime="2026-01-05 22:12:28.537772348 +0000 UTC m=+1240.909771787" watchObservedRunningTime="2026-01-05 22:12:28.542006558 +0000 UTC m=+1240.914005987" Jan 05 22:12:28 crc 
kubenswrapper[5034]: I0105 22:12:28.546484 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zb7v6" event={"ID":"29d670c8-6fba-43a1-a8e8-9bca9742792d","Type":"ContainerStarted","Data":"dc5aa9ed739ff271d86828c2e7dee17c602d4a46a4953c1b82d2563653b1a4e5"} Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.546591 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zb7v6" event={"ID":"29d670c8-6fba-43a1-a8e8-9bca9742792d","Type":"ContainerStarted","Data":"4e316fb0106d6f926c84bf4dc1c9f9f8a5f5c61979d3433aa9e42d18e4131a2f"} Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.546521 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.546601 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dkbzg" Jan 05 22:12:28 crc kubenswrapper[5034]: E0105 22:12:28.550798 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-sh52t" podUID="93443f38-a401-43ed-8ba6-7e0ebef66eb5" Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.569149 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-64l6z" podStartSLOduration=3.344003991 podStartE2EDuration="24.569119026s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="2026-01-05 22:12:06.197172606 +0000 UTC m=+1218.569172045" lastFinishedPulling="2026-01-05 22:12:27.422287631 +0000 UTC m=+1239.794287080" observedRunningTime="2026-01-05 22:12:28.554718168 +0000 UTC m=+1240.926717607" watchObservedRunningTime="2026-01-05 22:12:28.569119026 +0000 UTC m=+1240.941118485" Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.625147 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zb7v6" podStartSLOduration=14.625115772000001 podStartE2EDuration="14.625115772s" podCreationTimestamp="2026-01-05 22:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:28.608970065 +0000 UTC m=+1240.980969504" watchObservedRunningTime="2026-01-05 22:12:28.625115772 +0000 UTC m=+1240.997115211" Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.644677 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-p4hrh"] Jan 05 22:12:28 crc kubenswrapper[5034]: I0105 22:12:28.653840 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-p4hrh"] Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.087404 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8qk8w"] Jan 05 22:12:29 crc kubenswrapper[5034]: E0105 22:12:29.088531 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="init" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.088543 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="init" Jan 05 22:12:29 crc kubenswrapper[5034]: E0105 22:12:29.088564 5034 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="dnsmasq-dns" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.088573 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="dnsmasq-dns" Jan 05 22:12:29 crc kubenswrapper[5034]: E0105 22:12:29.088595 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ab947b-542c-4bd4-a4e9-493332d7caf5" containerName="neutron-db-sync" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.088601 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ab947b-542c-4bd4-a4e9-493332d7caf5" containerName="neutron-db-sync" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.088780 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="dnsmasq-dns" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.088789 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ab947b-542c-4bd4-a4e9-493332d7caf5" containerName="neutron-db-sync" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.097015 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.143197 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8qk8w"] Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.160997 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.161185 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.161321 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.161374 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsj8v\" (UniqueName: \"kubernetes.io/projected/437dcc56-7197-479b-bcf2-747fdad8b85a-kube-api-access-zsj8v\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.161449 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 
22:12:29.161530 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-config\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.202035 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8d6946db8-g2jm7"] Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.208577 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.212620 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.213185 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.213570 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.213723 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-77mmv" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.247712 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8d6946db8-g2jm7"] Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.264922 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.264986 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.265009 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsj8v\" (UniqueName: \"kubernetes.io/projected/437dcc56-7197-479b-bcf2-747fdad8b85a-kube-api-access-zsj8v\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.265037 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.265067 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-config\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.266060 5034 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.266768 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.267310 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.267432 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.268111 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.268466 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-config\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.293498 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsj8v\" (UniqueName: \"kubernetes.io/projected/437dcc56-7197-479b-bcf2-747fdad8b85a-kube-api-access-zsj8v\") pod \"dnsmasq-dns-6b9c8b59c-8qk8w\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.367972 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-httpd-config\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.368416 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-combined-ca-bundle\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.368445 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-config\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.368504 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk2gp\" (UniqueName: \"kubernetes.io/projected/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-kube-api-access-pk2gp\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.368549 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-ovndb-tls-certs\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.471442 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-combined-ca-bundle\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.471512 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-config\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.471604 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk2gp\" (UniqueName: \"kubernetes.io/projected/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-kube-api-access-pk2gp\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.471688 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-ovndb-tls-certs\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.471757 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-httpd-config\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.477138 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-httpd-config\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.477605 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.481841 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-ovndb-tls-certs\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.487636 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-config\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.488679 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-combined-ca-bundle\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.499555 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk2gp\" (UniqueName: \"kubernetes.io/projected/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-kube-api-access-pk2gp\") pod \"neutron-8d6946db8-g2jm7\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.553151 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.620557 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"760b010c-9b6e-4ed6-8ae0-9af72816c192","Type":"ContainerStarted","Data":"a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0"} Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.628584 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ca0033b-cb51-4e99-83f5-da165dbaf071","Type":"ContainerStarted","Data":"08b481be8f4d588f1f224933b3fa92e6285f6fec55fd3d6fdd899a1c595c15c7"} Jan 05 22:12:29 crc kubenswrapper[5034]: I0105 22:12:29.874390 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" path="/var/lib/kubelet/pods/994faa91-11c3-465e-9d3f-2bdbf2b84328/volumes" Jan 05 22:12:30 crc kubenswrapper[5034]: I0105 22:12:30.647753 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ca0033b-cb51-4e99-83f5-da165dbaf071","Type":"ContainerStarted","Data":"cbb1499179ed3cc68b798ca3c488b445cca4d8762d63d3c225b79659d6b899c1"} Jan 05 22:12:30 crc kubenswrapper[5034]: I0105 22:12:30.651131 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"760b010c-9b6e-4ed6-8ae0-9af72816c192","Type":"ContainerStarted","Data":"a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447"} Jan 05 22:12:30 crc kubenswrapper[5034]: I0105 22:12:30.673357 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.67333725 podStartE2EDuration="11.67333725s" podCreationTimestamp="2026-01-05 22:12:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:30.670658074 +0000 UTC m=+1243.042657533" watchObservedRunningTime="2026-01-05 22:12:30.67333725 +0000 UTC m=+1243.045336689" Jan 05 22:12:30 crc kubenswrapper[5034]: I0105 22:12:30.706256 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.706221502 podStartE2EDuration="11.706221502s" podCreationTimestamp="2026-01-05 22:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:30.699653876 +0000 UTC m=+1243.071653315" watchObservedRunningTime="2026-01-05 22:12:30.706221502 +0000 UTC m=+1243.078220941" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.300530 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8qk8w"] Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.432680 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c458b9699-9b8w4"] Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.435456 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.440193 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.440319 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.452833 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c458b9699-9b8w4"] Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.512406 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8d6946db8-g2jm7"] Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.540008 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-httpd-config\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.540102 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-internal-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.540168 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-combined-ca-bundle\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.540225 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-config\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " 
pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.540254 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-ovndb-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.540279 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9fvt\" (UniqueName: \"kubernetes.io/projected/5b457464-69a5-4e13-88a9-9e23250402d1-kube-api-access-z9fvt\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.540743 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-public-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.642773 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-config\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.642827 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-ovndb-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.642861 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9fvt\" (UniqueName: \"kubernetes.io/projected/5b457464-69a5-4e13-88a9-9e23250402d1-kube-api-access-z9fvt\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.642939 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-public-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.642974 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-httpd-config\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.643001 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-internal-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc 
kubenswrapper[5034]: I0105 22:12:31.643037 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-combined-ca-bundle\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.651216 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-ovndb-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.651390 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-httpd-config\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.652735 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-public-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.654551 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-combined-ca-bundle\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.655356 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-internal-tls-certs\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.658540 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-config\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.668168 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9fvt\" (UniqueName: \"kubernetes.io/projected/5b457464-69a5-4e13-88a9-9e23250402d1-kube-api-access-z9fvt\") pod \"neutron-6c458b9699-9b8w4\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.676010 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" event={"ID":"437dcc56-7197-479b-bcf2-747fdad8b85a","Type":"ContainerStarted","Data":"55be1eb42c37b2573335e58c1951f5284c2225b3c9d54424c5c2e52359938f95"} Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.676160 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" 
event={"ID":"437dcc56-7197-479b-bcf2-747fdad8b85a","Type":"ContainerStarted","Data":"24b2c1ae9b9e9dd1aefbad282e68c9e2c22e864878aab568cf6511ebe87d3a0b"} Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.679110 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d6946db8-g2jm7" event={"ID":"5ab942c4-0db8-41b2-87c2-5bfedd95c49a","Type":"ContainerStarted","Data":"4643281ecaea23e35654615d68c02f66dce02473a84c30efdd5e8d86be8fd0a7"} Jan 05 22:12:31 crc kubenswrapper[5034]: I0105 22:12:31.844370 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.228486 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79778dbd8c-p4hrh" podUID="994faa91-11c3-465e-9d3f-2bdbf2b84328" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.580195 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c458b9699-9b8w4"] Jan 05 22:12:32 crc kubenswrapper[5034]: W0105 22:12:32.610204 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b457464_69a5_4e13_88a9_9e23250402d1.slice/crio-cc37d5a7396e0bc234a7d7381ebe15ea3663d835e6596f174530e8996af4d4ec WatchSource:0}: Error finding container cc37d5a7396e0bc234a7d7381ebe15ea3663d835e6596f174530e8996af4d4ec: Status 404 returned error can't find the container with id cc37d5a7396e0bc234a7d7381ebe15ea3663d835e6596f174530e8996af4d4ec Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.693021 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d6946db8-g2jm7" event={"ID":"5ab942c4-0db8-41b2-87c2-5bfedd95c49a","Type":"ContainerStarted","Data":"325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f"} Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.693071 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d6946db8-g2jm7" event={"ID":"5ab942c4-0db8-41b2-87c2-5bfedd95c49a","Type":"ContainerStarted","Data":"0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba"} Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.693213 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.697709 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerStarted","Data":"77f7407b3079cc0feea7e70b078274014c8ff4e2ca6b9213f64bc2e1da621b38"} Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.700326 5034 generic.go:334] "Generic (PLEG): container finished" podID="29d670c8-6fba-43a1-a8e8-9bca9742792d" containerID="dc5aa9ed739ff271d86828c2e7dee17c602d4a46a4953c1b82d2563653b1a4e5" exitCode=0 Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.700392 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zb7v6" event={"ID":"29d670c8-6fba-43a1-a8e8-9bca9742792d","Type":"ContainerDied","Data":"dc5aa9ed739ff271d86828c2e7dee17c602d4a46a4953c1b82d2563653b1a4e5"} Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.702196 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c458b9699-9b8w4" 
event={"ID":"5b457464-69a5-4e13-88a9-9e23250402d1","Type":"ContainerStarted","Data":"cc37d5a7396e0bc234a7d7381ebe15ea3663d835e6596f174530e8996af4d4ec"} Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.704604 5034 generic.go:334] "Generic (PLEG): container finished" podID="7ad742f5-9855-40e9-953f-fc2cf3baee89" containerID="8dfe9432e0a3f3a5fcfa99b3016481d77d604ec3574ab04670586251b9e6233d" exitCode=0 Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.704660 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htrf9" event={"ID":"7ad742f5-9855-40e9-953f-fc2cf3baee89","Type":"ContainerDied","Data":"8dfe9432e0a3f3a5fcfa99b3016481d77d604ec3574ab04670586251b9e6233d"} Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.728929 5034 generic.go:334] "Generic (PLEG): container finished" podID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerID="55be1eb42c37b2573335e58c1951f5284c2225b3c9d54424c5c2e52359938f95" exitCode=0 Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.729018 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" event={"ID":"437dcc56-7197-479b-bcf2-747fdad8b85a","Type":"ContainerDied","Data":"55be1eb42c37b2573335e58c1951f5284c2225b3c9d54424c5c2e52359938f95"} Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.737704 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8d6946db8-g2jm7" podStartSLOduration=3.737682784 podStartE2EDuration="3.737682784s" podCreationTimestamp="2026-01-05 22:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:32.717957045 +0000 UTC m=+1245.089956484" watchObservedRunningTime="2026-01-05 22:12:32.737682784 +0000 UTC m=+1245.109682223" Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.743858 5034 generic.go:334] "Generic (PLEG): container finished" podID="e91a5139-5537-4578-ab4f-67d52927afa9" containerID="1f543abb49aaf470b43131d8884c14c08720cf8fb15c21caa315c8791607b026" exitCode=0 Jan 05 22:12:32 crc kubenswrapper[5034]: I0105 22:12:32.743907 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-64l6z" event={"ID":"e91a5139-5537-4578-ab4f-67d52927afa9","Type":"ContainerDied","Data":"1f543abb49aaf470b43131d8884c14c08720cf8fb15c21caa315c8791607b026"} Jan 05 22:12:33 crc kubenswrapper[5034]: I0105 22:12:33.809284 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" event={"ID":"437dcc56-7197-479b-bcf2-747fdad8b85a","Type":"ContainerStarted","Data":"4005af6d05c0008f2863e7bac1801f0fa804e65b89c2cccb889aee58c098d158"} Jan 05 22:12:33 crc kubenswrapper[5034]: I0105 22:12:33.811215 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:33 crc kubenswrapper[5034]: I0105 22:12:33.822829 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c458b9699-9b8w4" event={"ID":"5b457464-69a5-4e13-88a9-9e23250402d1","Type":"ContainerStarted","Data":"3b923196a4d918a3fdfb27750f013d3bb48b93297dee98bc255d7c448bb47281"} Jan 05 22:12:33 crc kubenswrapper[5034]: I0105 22:12:33.822872 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c458b9699-9b8w4" event={"ID":"5b457464-69a5-4e13-88a9-9e23250402d1","Type":"ContainerStarted","Data":"8a4ccd2cd507ddb6502cfdecb3eea7f0e3fcbcc526f6e6220ee67d322421fe39"} Jan 05 22:12:33 crc 
kubenswrapper[5034]: I0105 22:12:33.823582 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:12:33 crc kubenswrapper[5034]: I0105 22:12:33.836861 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" podStartSLOduration=4.836837258 podStartE2EDuration="4.836837258s" podCreationTimestamp="2026-01-05 22:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:33.831039564 +0000 UTC m=+1246.203039003" watchObservedRunningTime="2026-01-05 22:12:33.836837258 +0000 UTC m=+1246.208836697" Jan 05 22:12:33 crc kubenswrapper[5034]: I0105 22:12:33.880439 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c458b9699-9b8w4" podStartSLOduration=2.8804050820000002 podStartE2EDuration="2.880405082s" podCreationTimestamp="2026-01-05 22:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:33.855002123 +0000 UTC m=+1246.227001572" watchObservedRunningTime="2026-01-05 22:12:33.880405082 +0000 UTC m=+1246.252404521" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.514395 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.531371 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.547714 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.628617 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-combined-ca-bundle\") pod \"e91a5139-5537-4578-ab4f-67d52927afa9\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.629193 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-credential-keys\") pod \"29d670c8-6fba-43a1-a8e8-9bca9742792d\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.629372 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wqtj\" (UniqueName: \"kubernetes.io/projected/29d670c8-6fba-43a1-a8e8-9bca9742792d-kube-api-access-5wqtj\") pod \"29d670c8-6fba-43a1-a8e8-9bca9742792d\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.629457 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-scripts\") pod \"7ad742f5-9855-40e9-953f-fc2cf3baee89\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.629832 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-combined-ca-bundle\") pod \"7ad742f5-9855-40e9-953f-fc2cf3baee89\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.629988 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-scripts\") pod \"29d670c8-6fba-43a1-a8e8-9bca9742792d\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.630377 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tldnv\" (UniqueName: \"kubernetes.io/projected/e91a5139-5537-4578-ab4f-67d52927afa9-kube-api-access-tldnv\") pod \"e91a5139-5537-4578-ab4f-67d52927afa9\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.630586 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-combined-ca-bundle\") pod \"29d670c8-6fba-43a1-a8e8-9bca9742792d\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.630771 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-db-sync-config-data\") pod \"e91a5139-5537-4578-ab4f-67d52927afa9\" (UID: \"e91a5139-5537-4578-ab4f-67d52927afa9\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.630951 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad742f5-9855-40e9-953f-fc2cf3baee89-logs\") pod \"7ad742f5-9855-40e9-953f-fc2cf3baee89\" 
(UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.631240 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-fernet-keys\") pod \"29d670c8-6fba-43a1-a8e8-9bca9742792d\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.631445 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-config-data\") pod \"29d670c8-6fba-43a1-a8e8-9bca9742792d\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.631691 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-config-data\") pod \"7ad742f5-9855-40e9-953f-fc2cf3baee89\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.631876 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4nsm\" (UniqueName: \"kubernetes.io/projected/7ad742f5-9855-40e9-953f-fc2cf3baee89-kube-api-access-d4nsm\") pod \"7ad742f5-9855-40e9-953f-fc2cf3baee89\" (UID: \"7ad742f5-9855-40e9-953f-fc2cf3baee89\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.642959 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad742f5-9855-40e9-953f-fc2cf3baee89-logs" (OuterVolumeSpecName: "logs") pod "7ad742f5-9855-40e9-953f-fc2cf3baee89" (UID: "7ad742f5-9855-40e9-953f-fc2cf3baee89"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.653692 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91a5139-5537-4578-ab4f-67d52927afa9-kube-api-access-tldnv" (OuterVolumeSpecName: "kube-api-access-tldnv") pod "e91a5139-5537-4578-ab4f-67d52927afa9" (UID: "e91a5139-5537-4578-ab4f-67d52927afa9"). InnerVolumeSpecName "kube-api-access-tldnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.657257 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-scripts" (OuterVolumeSpecName: "scripts") pod "7ad742f5-9855-40e9-953f-fc2cf3baee89" (UID: "7ad742f5-9855-40e9-953f-fc2cf3baee89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.657286 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "29d670c8-6fba-43a1-a8e8-9bca9742792d" (UID: "29d670c8-6fba-43a1-a8e8-9bca9742792d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.657296 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d670c8-6fba-43a1-a8e8-9bca9742792d-kube-api-access-5wqtj" (OuterVolumeSpecName: "kube-api-access-5wqtj") pod "29d670c8-6fba-43a1-a8e8-9bca9742792d" (UID: "29d670c8-6fba-43a1-a8e8-9bca9742792d"). 
InnerVolumeSpecName "kube-api-access-5wqtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.660724 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad742f5-9855-40e9-953f-fc2cf3baee89-kube-api-access-d4nsm" (OuterVolumeSpecName: "kube-api-access-d4nsm") pod "7ad742f5-9855-40e9-953f-fc2cf3baee89" (UID: "7ad742f5-9855-40e9-953f-fc2cf3baee89"). InnerVolumeSpecName "kube-api-access-d4nsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.678377 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e91a5139-5537-4578-ab4f-67d52927afa9" (UID: "e91a5139-5537-4578-ab4f-67d52927afa9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.680861 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "29d670c8-6fba-43a1-a8e8-9bca9742792d" (UID: "29d670c8-6fba-43a1-a8e8-9bca9742792d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.683223 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-scripts" (OuterVolumeSpecName: "scripts") pod "29d670c8-6fba-43a1-a8e8-9bca9742792d" (UID: "29d670c8-6fba-43a1-a8e8-9bca9742792d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.707228 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ad742f5-9855-40e9-953f-fc2cf3baee89" (UID: "7ad742f5-9855-40e9-953f-fc2cf3baee89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.710203 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e91a5139-5537-4578-ab4f-67d52927afa9" (UID: "e91a5139-5537-4578-ab4f-67d52927afa9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.731341 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-config-data" (OuterVolumeSpecName: "config-data") pod "29d670c8-6fba-43a1-a8e8-9bca9742792d" (UID: "29d670c8-6fba-43a1-a8e8-9bca9742792d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.734253 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29d670c8-6fba-43a1-a8e8-9bca9742792d" (UID: "29d670c8-6fba-43a1-a8e8-9bca9742792d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736031 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-combined-ca-bundle\") pod \"29d670c8-6fba-43a1-a8e8-9bca9742792d\" (UID: \"29d670c8-6fba-43a1-a8e8-9bca9742792d\") " Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736702 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736722 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736735 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tldnv\" (UniqueName: \"kubernetes.io/projected/e91a5139-5537-4578-ab4f-67d52927afa9-kube-api-access-tldnv\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736747 5034 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736758 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad742f5-9855-40e9-953f-fc2cf3baee89-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736766 5034 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736774 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736782 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4nsm\" (UniqueName: \"kubernetes.io/projected/7ad742f5-9855-40e9-953f-fc2cf3baee89-kube-api-access-d4nsm\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736791 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a5139-5537-4578-ab4f-67d52927afa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736799 5034 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736846 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wqtj\" (UniqueName: \"kubernetes.io/projected/29d670c8-6fba-43a1-a8e8-9bca9742792d-kube-api-access-5wqtj\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.736855 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: W0105 22:12:34.737369 5034 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/29d670c8-6fba-43a1-a8e8-9bca9742792d/volumes/kubernetes.io~secret/combined-ca-bundle Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.737411 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29d670c8-6fba-43a1-a8e8-9bca9742792d" (UID: "29d670c8-6fba-43a1-a8e8-9bca9742792d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.747187 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-config-data" (OuterVolumeSpecName: "config-data") pod "7ad742f5-9855-40e9-953f-fc2cf3baee89" (UID: "7ad742f5-9855-40e9-953f-fc2cf3baee89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.856460 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad742f5-9855-40e9-953f-fc2cf3baee89-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.856510 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d670c8-6fba-43a1-a8e8-9bca9742792d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.860940 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htrf9" event={"ID":"7ad742f5-9855-40e9-953f-fc2cf3baee89","Type":"ContainerDied","Data":"393c6db0c42826d3169e3fcb512818896bf5358f956d37a41209def900461d2e"} Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.860994 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="393c6db0c42826d3169e3fcb512818896bf5358f956d37a41209def900461d2e" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.861111 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htrf9" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.879615 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-64l6z" event={"ID":"e91a5139-5537-4578-ab4f-67d52927afa9","Type":"ContainerDied","Data":"bcd903d0aae433a1183f358671e8e3e57ee2ca2235e26f2aa070d19187149f7e"} Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.879688 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcd903d0aae433a1183f358671e8e3e57ee2ca2235e26f2aa070d19187149f7e" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.879810 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-64l6z" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.885549 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zb7v6" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.885728 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zb7v6" event={"ID":"29d670c8-6fba-43a1-a8e8-9bca9742792d","Type":"ContainerDied","Data":"4e316fb0106d6f926c84bf4dc1c9f9f8a5f5c61979d3433aa9e42d18e4131a2f"} Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.885815 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e316fb0106d6f926c84bf4dc1c9f9f8a5f5c61979d3433aa9e42d18e4131a2f" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.945116 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c695fbb7-pzj94"] Jan 05 22:12:34 crc kubenswrapper[5034]: E0105 22:12:34.945679 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91a5139-5537-4578-ab4f-67d52927afa9" containerName="barbican-db-sync" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.945706 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91a5139-5537-4578-ab4f-67d52927afa9" containerName="barbican-db-sync" Jan 05 22:12:34 crc kubenswrapper[5034]: E0105 22:12:34.945726 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad742f5-9855-40e9-953f-fc2cf3baee89" containerName="placement-db-sync" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.945734 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad742f5-9855-40e9-953f-fc2cf3baee89" containerName="placement-db-sync" Jan 05 22:12:34 crc kubenswrapper[5034]: E0105 22:12:34.945747 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d670c8-6fba-43a1-a8e8-9bca9742792d" containerName="keystone-bootstrap" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.945755 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d670c8-6fba-43a1-a8e8-9bca9742792d" containerName="keystone-bootstrap" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.945957 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d670c8-6fba-43a1-a8e8-9bca9742792d" containerName="keystone-bootstrap" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.945985 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91a5139-5537-4578-ab4f-67d52927afa9" containerName="barbican-db-sync" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.946013 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad742f5-9855-40e9-953f-fc2cf3baee89" containerName="placement-db-sync" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.946765 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.951635 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.951850 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.951944 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-62wxh" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.952097 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.952319 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.952347 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 22:12:34 crc kubenswrapper[5034]: I0105 22:12:34.968666 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c695fbb7-pzj94"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.008148 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-664f75f5b6-lz6hv"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.017579 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.023370 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.023578 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.023693 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.023867 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8nx2b" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.024067 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.063559 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-664f75f5b6-lz6hv"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.065197 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-fernet-keys\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.065314 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-scripts\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.065417 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-config-data\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.065510 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-combined-ca-bundle\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.065661 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qlj5\" (UniqueName: \"kubernetes.io/projected/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-kube-api-access-5qlj5\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.065808 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-credential-keys\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.065897 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-internal-tls-certs\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.065993 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-public-tls-certs\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180440 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-scripts\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180521 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-config-data\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180572 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-combined-ca-bundle\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180593 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qlj5\" (UniqueName: 
\"kubernetes.io/projected/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-kube-api-access-5qlj5\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180661 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-credential-keys\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180691 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-internal-tls-certs\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180734 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-public-tls-certs\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180754 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-config-data\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180790 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-internal-tls-certs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180811 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11dc2db-1f91-4ec6-9efd-333fcafface4-logs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180857 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjr82\" (UniqueName: \"kubernetes.io/projected/d11dc2db-1f91-4ec6-9efd-333fcafface4-kube-api-access-cjr82\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180879 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-combined-ca-bundle\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180942 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-fernet-keys\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180962 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-public-tls-certs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.180994 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-scripts\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.186545 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-public-tls-certs\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.200366 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-internal-tls-certs\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.202041 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-679959649b-bksnm"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.203092 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-config-data\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.214631 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-scripts\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.224072 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.225634 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-combined-ca-bundle\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.226524 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-fernet-keys\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.230065 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.232000 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.232265 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xdttp" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.247803 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-credential-keys\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.251494 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6d6dccdcd5-gglfm"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.257929 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.260219 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qlj5\" (UniqueName: \"kubernetes.io/projected/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-kube-api-access-5qlj5\") pod \"keystone-7c695fbb7-pzj94\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.269976 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-679959649b-bksnm"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.280472 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.285886 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d6dccdcd5-gglfm"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.289236 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-internal-tls-certs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.289285 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11dc2db-1f91-4ec6-9efd-333fcafface4-logs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.289335 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjr82\" (UniqueName: \"kubernetes.io/projected/d11dc2db-1f91-4ec6-9efd-333fcafface4-kube-api-access-cjr82\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.289361 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-combined-ca-bundle\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.289421 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-public-tls-certs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.289470 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-scripts\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.289555 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-config-data\") pod 
\"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.291067 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11dc2db-1f91-4ec6-9efd-333fcafface4-logs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.298042 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-internal-tls-certs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.298668 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-public-tls-certs\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.298709 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8qk8w"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.301387 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-config-data\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.302994 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-scripts\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.306809 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-combined-ca-bundle\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.315718 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-w8l48"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.325744 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjr82\" (UniqueName: \"kubernetes.io/projected/d11dc2db-1f91-4ec6-9efd-333fcafface4-kube-api-access-cjr82\") pod \"placement-664f75f5b6-lz6hv\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.332969 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-w8l48"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.333166 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397318 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjj4\" (UniqueName: \"kubernetes.io/projected/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-kube-api-access-lwjj4\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397589 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-logs\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397623 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttbs\" (UniqueName: \"kubernetes.io/projected/e86527c2-480f-4508-be25-9b2eab1f4274-kube-api-access-kttbs\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397676 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397701 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-combined-ca-bundle\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397728 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data-custom\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397750 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e86527c2-480f-4508-be25-9b2eab1f4274-logs\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397792 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397879 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-combined-ca-bundle\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.397910 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data-custom\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.428172 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55b7fd45b4-gmqkw"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.430475 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.432815 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.469236 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55b7fd45b4-gmqkw"] Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.513758 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.513829 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-config\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.513866 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwjj4\" (UniqueName: \"kubernetes.io/projected/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-kube-api-access-lwjj4\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.513911 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.513933 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzgtm\" (UniqueName: \"kubernetes.io/projected/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-kube-api-access-lzgtm\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 
22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.513980 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-logs\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514001 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kttbs\" (UniqueName: \"kubernetes.io/projected/e86527c2-480f-4508-be25-9b2eab1f4274-kube-api-access-kttbs\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514019 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514057 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514077 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-combined-ca-bundle\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514116 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data-custom\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514137 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e86527c2-480f-4508-be25-9b2eab1f4274-logs\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514163 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514195 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: 
\"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514231 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-combined-ca-bundle\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.514250 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data-custom\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.518753 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data-custom\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.519369 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-logs\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.519736 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.521194 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e86527c2-480f-4508-be25-9b2eab1f4274-logs\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.540947 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-combined-ca-bundle\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.540964 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-combined-ca-bundle\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.545149 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.549965 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwjj4\" (UniqueName: \"kubernetes.io/projected/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-kube-api-access-lwjj4\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.556741 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.571989 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data-custom\") pod \"barbican-keystone-listener-679959649b-bksnm\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.580010 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kttbs\" (UniqueName: \"kubernetes.io/projected/e86527c2-480f-4508-be25-9b2eab1f4274-kube-api-access-kttbs\") pod \"barbican-worker-6d6dccdcd5-gglfm\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616251 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-config\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" 
(UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616365 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616396 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-combined-ca-bundle\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616418 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzgtm\" (UniqueName: \"kubernetes.io/projected/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-kube-api-access-lzgtm\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616434 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqrpx\" (UniqueName: \"kubernetes.io/projected/f44d5480-b711-4d12-b8df-55cb25360488-kube-api-access-hqrpx\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616477 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616537 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616569 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44d5480-b711-4d12-b8df-55cb25360488-logs\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616619 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616650 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data-custom\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.616680 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.617662 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.619119 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.619328 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.619750 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-config\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.620366 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.689010 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzgtm\" (UniqueName: \"kubernetes.io/projected/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-kube-api-access-lzgtm\") pod \"dnsmasq-dns-7bdf86f46f-w8l48\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.719217 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44d5480-b711-4d12-b8df-55cb25360488-logs\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.719324 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data-custom\") pod 
\"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.719399 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-combined-ca-bundle\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.719420 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqrpx\" (UniqueName: \"kubernetes.io/projected/f44d5480-b711-4d12-b8df-55cb25360488-kube-api-access-hqrpx\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.719466 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.747921 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.762919 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.776097 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.792931 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.853742 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44d5480-b711-4d12-b8df-55cb25360488-logs\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.858027 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqrpx\" (UniqueName: \"kubernetes.io/projected/f44d5480-b711-4d12-b8df-55cb25360488-kube-api-access-hqrpx\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.858687 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.863742 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data-custom\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:35 crc kubenswrapper[5034]: I0105 22:12:35.876107 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-combined-ca-bundle\") pod \"barbican-api-55b7fd45b4-gmqkw\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:36 crc kubenswrapper[5034]: I0105 22:12:36.155552 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:36 crc kubenswrapper[5034]: I0105 22:12:36.907919 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" podUID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerName="dnsmasq-dns" containerID="cri-o://4005af6d05c0008f2863e7bac1801f0fa804e65b89c2cccb889aee58c098d158" gracePeriod=10 Jan 05 22:12:37 crc kubenswrapper[5034]: I0105 22:12:37.919486 5034 generic.go:334] "Generic (PLEG): container finished" podID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerID="4005af6d05c0008f2863e7bac1801f0fa804e65b89c2cccb889aee58c098d158" exitCode=0 Jan 05 22:12:37 crc kubenswrapper[5034]: I0105 22:12:37.919507 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" event={"ID":"437dcc56-7197-479b-bcf2-747fdad8b85a","Type":"ContainerDied","Data":"4005af6d05c0008f2863e7bac1801f0fa804e65b89c2cccb889aee58c098d158"} Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.776408 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b9b4698bd-747dm"] Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.778916 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.781812 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.782066 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.791695 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b9b4698bd-747dm"] Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.898030 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jz5t\" (UniqueName: \"kubernetes.io/projected/6c0c6abd-9d45-4022-aca3-5e63949d1aab-kube-api-access-9jz5t\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.898152 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-combined-ca-bundle\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.898222 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data-custom\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.898245 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-internal-tls-certs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.898393 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.898652 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0c6abd-9d45-4022-aca3-5e63949d1aab-logs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:38 crc kubenswrapper[5034]: I0105 22:12:38.898681 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-public-tls-certs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.000203 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9jz5t\" (UniqueName: \"kubernetes.io/projected/6c0c6abd-9d45-4022-aca3-5e63949d1aab-kube-api-access-9jz5t\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.000315 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-combined-ca-bundle\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.000381 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data-custom\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.000404 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-internal-tls-certs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.000486 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.000545 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0c6abd-9d45-4022-aca3-5e63949d1aab-logs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.000584 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-public-tls-certs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.003673 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0c6abd-9d45-4022-aca3-5e63949d1aab-logs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.008361 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data-custom\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.008515 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-public-tls-certs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.008677 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-internal-tls-certs\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.009543 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.020929 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-combined-ca-bundle\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.025092 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jz5t\" (UniqueName: \"kubernetes.io/projected/6c0c6abd-9d45-4022-aca3-5e63949d1aab-kube-api-access-9jz5t\") pod \"barbican-api-6b9b4698bd-747dm\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.103145 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.867312 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.868093 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.885094 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.885371 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.906948 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.916531 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.925989 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.950261 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.957838 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" event={"ID":"437dcc56-7197-479b-bcf2-747fdad8b85a","Type":"ContainerDied","Data":"24b2c1ae9b9e9dd1aefbad282e68c9e2c22e864878aab568cf6511ebe87d3a0b"} Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.957892 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b2c1ae9b9e9dd1aefbad282e68c9e2c22e864878aab568cf6511ebe87d3a0b" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.958612 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.958641 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.958653 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.958663 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 22:12:39 crc kubenswrapper[5034]: I0105 22:12:39.978358 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.131769 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-swift-storage-0\") pod \"437dcc56-7197-479b-bcf2-747fdad8b85a\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.132564 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-sb\") pod \"437dcc56-7197-479b-bcf2-747fdad8b85a\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.132771 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsj8v\" (UniqueName: \"kubernetes.io/projected/437dcc56-7197-479b-bcf2-747fdad8b85a-kube-api-access-zsj8v\") pod \"437dcc56-7197-479b-bcf2-747fdad8b85a\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.132877 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-svc\") pod \"437dcc56-7197-479b-bcf2-747fdad8b85a\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.133059 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-nb\") pod \"437dcc56-7197-479b-bcf2-747fdad8b85a\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.133118 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-config\") pod \"437dcc56-7197-479b-bcf2-747fdad8b85a\" (UID: \"437dcc56-7197-479b-bcf2-747fdad8b85a\") " Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.147503 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437dcc56-7197-479b-bcf2-747fdad8b85a-kube-api-access-zsj8v" (OuterVolumeSpecName: "kube-api-access-zsj8v") pod "437dcc56-7197-479b-bcf2-747fdad8b85a" (UID: "437dcc56-7197-479b-bcf2-747fdad8b85a"). InnerVolumeSpecName "kube-api-access-zsj8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.254236 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsj8v\" (UniqueName: \"kubernetes.io/projected/437dcc56-7197-479b-bcf2-747fdad8b85a-kube-api-access-zsj8v\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.299893 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "437dcc56-7197-479b-bcf2-747fdad8b85a" (UID: "437dcc56-7197-479b-bcf2-747fdad8b85a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.326981 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "437dcc56-7197-479b-bcf2-747fdad8b85a" (UID: "437dcc56-7197-479b-bcf2-747fdad8b85a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.330229 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "437dcc56-7197-479b-bcf2-747fdad8b85a" (UID: "437dcc56-7197-479b-bcf2-747fdad8b85a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.349007 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "437dcc56-7197-479b-bcf2-747fdad8b85a" (UID: "437dcc56-7197-479b-bcf2-747fdad8b85a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.357069 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.357169 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.357209 5034 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.357224 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.361900 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-config" (OuterVolumeSpecName: "config") pod "437dcc56-7197-479b-bcf2-747fdad8b85a" (UID: "437dcc56-7197-479b-bcf2-747fdad8b85a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.458868 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/437dcc56-7197-479b-bcf2-747fdad8b85a-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.919534 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b9b4698bd-747dm"] Jan 05 22:12:40 crc kubenswrapper[5034]: W0105 22:12:40.961470 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda85883d_cfc8_4e82_ad5d_f0889f79b7c3.slice/crio-3f72919cbdf44d5836aacffa2571ee7182b5aa4f2cd50f413521ea9e9b0e2b31 WatchSource:0}: Error finding container 3f72919cbdf44d5836aacffa2571ee7182b5aa4f2cd50f413521ea9e9b0e2b31: Status 404 returned error can't find the container with id 3f72919cbdf44d5836aacffa2571ee7182b5aa4f2cd50f413521ea9e9b0e2b31 Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.980331 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-679959649b-bksnm"] Jan 05 22:12:40 crc kubenswrapper[5034]: I0105 22:12:40.998292 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9b4698bd-747dm" event={"ID":"6c0c6abd-9d45-4022-aca3-5e63949d1aab","Type":"ContainerStarted","Data":"4b0c2f55e01d555c1150bee907fd245b8820378bbca1e93c3ce49ef7391741ea"} Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.005515 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.008150 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-679959649b-bksnm" event={"ID":"ff813b46-2db4-46af-ad1b-3e84fcb8e33b","Type":"ContainerStarted","Data":"525589c0f54e7d0cf5eb3db05a066b702d75de6747488a1105c0ac92a8bf0343"} Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.042827 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c695fbb7-pzj94"] Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.053603 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-664f75f5b6-lz6hv"] Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.078128 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8qk8w"] Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.088844 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-8qk8w"] Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.163999 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-w8l48"] Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.250166 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d6dccdcd5-gglfm"] Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.258335 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55b7fd45b4-gmqkw"] Jan 05 22:12:41 crc kubenswrapper[5034]: W0105 22:12:41.400274 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86527c2_480f_4508_be25_9b2eab1f4274.slice/crio-67fe16ebfb09e58dc945e192dae33c37d2ed1ee4a74813851295f669c0b65b11 WatchSource:0}: Error finding container 
67fe16ebfb09e58dc945e192dae33c37d2ed1ee4a74813851295f669c0b65b11: Status 404 returned error can't find the container with id 67fe16ebfb09e58dc945e192dae33c37d2ed1ee4a74813851295f669c0b65b11 Jan 05 22:12:41 crc kubenswrapper[5034]: W0105 22:12:41.403554 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf44d5480_b711_4d12_b8df_55cb25360488.slice/crio-90336de22dbf7168d6ad2d024a3733c7082cabe1af1575e30b6fd377f03d033e WatchSource:0}: Error finding container 90336de22dbf7168d6ad2d024a3733c7082cabe1af1575e30b6fd377f03d033e: Status 404 returned error can't find the container with id 90336de22dbf7168d6ad2d024a3733c7082cabe1af1575e30b6fd377f03d033e Jan 05 22:12:41 crc kubenswrapper[5034]: I0105 22:12:41.864974 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="437dcc56-7197-479b-bcf2-747fdad8b85a" path="/var/lib/kubelet/pods/437dcc56-7197-479b-bcf2-747fdad8b85a/volumes" Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.018799 5034 generic.go:334] "Generic (PLEG): container finished" podID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" containerID="dace7e39433a4fdf74bc06795a42ae68ec7df0a1e7b30dbb2b94b84c144a5a9c" exitCode=0 Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.019131 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" event={"ID":"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8","Type":"ContainerDied","Data":"dace7e39433a4fdf74bc06795a42ae68ec7df0a1e7b30dbb2b94b84c144a5a9c"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.019310 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" event={"ID":"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8","Type":"ContainerStarted","Data":"46a61e9707f7d32bf998e5c5a9c4db4627051f831fb2f893bedd0542848c1580"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.032728 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b7fd45b4-gmqkw" event={"ID":"f44d5480-b711-4d12-b8df-55cb25360488","Type":"ContainerStarted","Data":"afcfeb6f8a89e08926ddc001cf81cf22fbb18f8ffc85fad65108fcbad7375510"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.032817 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b7fd45b4-gmqkw" event={"ID":"f44d5480-b711-4d12-b8df-55cb25360488","Type":"ContainerStarted","Data":"90336de22dbf7168d6ad2d024a3733c7082cabe1af1575e30b6fd377f03d033e"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.038035 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" event={"ID":"e86527c2-480f-4508-be25-9b2eab1f4274","Type":"ContainerStarted","Data":"67fe16ebfb09e58dc945e192dae33c37d2ed1ee4a74813851295f669c0b65b11"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.041234 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c695fbb7-pzj94" event={"ID":"da85883d-cfc8-4e82-ad5d-f0889f79b7c3","Type":"ContainerStarted","Data":"939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.041263 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c695fbb7-pzj94" event={"ID":"da85883d-cfc8-4e82-ad5d-f0889f79b7c3","Type":"ContainerStarted","Data":"3f72919cbdf44d5836aacffa2571ee7182b5aa4f2cd50f413521ea9e9b0e2b31"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.044196 5034 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.061593 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664f75f5b6-lz6hv" event={"ID":"d11dc2db-1f91-4ec6-9efd-333fcafface4","Type":"ContainerStarted","Data":"08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.061895 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664f75f5b6-lz6hv" event={"ID":"d11dc2db-1f91-4ec6-9efd-333fcafface4","Type":"ContainerStarted","Data":"e96c0b944bbe9235e4cb293e573ef000f7a81d3face410d936157396f7fcb4ba"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.077672 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerStarted","Data":"99b7159e7efd2259210842400db47567ccc1de87c8306bb805a1cb95198ca5ab"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.087403 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c695fbb7-pzj94" podStartSLOduration=8.08738086 podStartE2EDuration="8.08738086s" podCreationTimestamp="2026-01-05 22:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:42.075216185 +0000 UTC m=+1254.447215634" watchObservedRunningTime="2026-01-05 22:12:42.08738086 +0000 UTC m=+1254.459380299" Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.091217 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9b4698bd-747dm" event={"ID":"6c0c6abd-9d45-4022-aca3-5e63949d1aab","Type":"ContainerStarted","Data":"dc580afcb0d5964a6770e1805a08f6ab8ba168d592cf15a21e4431f5b1c61076"} Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.757768 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.758198 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.771456 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.892705 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.892801 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 22:12:42 crc kubenswrapper[5034]: I0105 22:12:42.893900 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.197118 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664f75f5b6-lz6hv" event={"ID":"d11dc2db-1f91-4ec6-9efd-333fcafface4","Type":"ContainerStarted","Data":"0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2"} Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.198544 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.198574 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-664f75f5b6-lz6hv" 
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.208278 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9b4698bd-747dm" event={"ID":"6c0c6abd-9d45-4022-aca3-5e63949d1aab","Type":"ContainerStarted","Data":"6800fbb56148e87618fda2df370bdf264e113c0b015622bf898f8261f8fafda1"}
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.208499 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b9b4698bd-747dm"
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.235789 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" event={"ID":"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8","Type":"ContainerStarted","Data":"fa897c95f67821d169e5ece96f92320860ef62c35cb6a30998ee88f589988069"}
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.236829 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48"
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.299202 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-664f75f5b6-lz6hv" podStartSLOduration=9.299180925 podStartE2EDuration="9.299180925s" podCreationTimestamp="2026-01-05 22:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:43.243485287 +0000 UTC m=+1255.615484726" watchObservedRunningTime="2026-01-05 22:12:43.299180925 +0000 UTC m=+1255.671180364"
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.304253 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b7fd45b4-gmqkw" event={"ID":"f44d5480-b711-4d12-b8df-55cb25360488","Type":"ContainerStarted","Data":"f81f93bf4bcc59025be39bc83f701532657cba6298dc446886d5ccade7d655c6"}
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.304699 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55b7fd45b4-gmqkw"
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.304754 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55b7fd45b4-gmqkw"
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.317448 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b9b4698bd-747dm" podStartSLOduration=5.317425192 podStartE2EDuration="5.317425192s" podCreationTimestamp="2026-01-05 22:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:43.270524183 +0000 UTC m=+1255.642523622" watchObservedRunningTime="2026-01-05 22:12:43.317425192 +0000 UTC m=+1255.689424631"
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.348452 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" podStartSLOduration=8.348408209 podStartE2EDuration="8.348408209s" podCreationTimestamp="2026-01-05 22:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:43.306466271 +0000 UTC m=+1255.678465710" watchObservedRunningTime="2026-01-05 22:12:43.348408209 +0000 UTC m=+1255.720407648"
Jan 05 22:12:43 crc kubenswrapper[5034]: I0105 22:12:43.410440 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podStartSLOduration=8.410417806 podStartE2EDuration="8.410417806s" podCreationTimestamp="2026-01-05 22:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:43.351235219 +0000 UTC m=+1255.723234658" watchObservedRunningTime="2026-01-05 22:12:43.410417806 +0000 UTC m=+1255.782417255"
Jan 05 22:12:44 crc kubenswrapper[5034]: I0105 22:12:44.104064 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b9b4698bd-747dm"
Jan 05 22:12:44 crc kubenswrapper[5034]: I0105 22:12:44.319594 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sh52t" event={"ID":"93443f38-a401-43ed-8ba6-7e0ebef66eb5","Type":"ContainerStarted","Data":"3f25f80763e2de95eb07d3e84abe961949c5c4da5f365db39eff6df0d608f658"}
Jan 05 22:12:44 crc kubenswrapper[5034]: I0105 22:12:44.337500 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-sh52t" podStartSLOduration=4.31730351 podStartE2EDuration="40.337468655s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="2026-01-05 22:12:06.122181712 +0000 UTC m=+1218.494181151" lastFinishedPulling="2026-01-05 22:12:42.142346857 +0000 UTC m=+1254.514346296" observedRunningTime="2026-01-05 22:12:44.336946261 +0000 UTC m=+1256.708945700" watchObservedRunningTime="2026-01-05 22:12:44.337468655 +0000 UTC m=+1256.709468104"
Jan 05 22:12:44 crc kubenswrapper[5034]: I0105 22:12:44.478748 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b9c8b59c-8qk8w" podUID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: i/o timeout"
Jan 05 22:12:45 crc kubenswrapper[5034]: I0105 22:12:45.333568 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-679959649b-bksnm" event={"ID":"ff813b46-2db4-46af-ad1b-3e84fcb8e33b","Type":"ContainerStarted","Data":"e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39"}
Jan 05 22:12:45 crc kubenswrapper[5034]: I0105 22:12:45.335486 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" event={"ID":"e86527c2-480f-4508-be25-9b2eab1f4274","Type":"ContainerStarted","Data":"ed46a153785fa5cc88884b8676fd407f393552edec4eb2fef58f1e35704d646a"}
Jan 05 22:12:46 crc kubenswrapper[5034]: I0105 22:12:46.351310 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-679959649b-bksnm" event={"ID":"ff813b46-2db4-46af-ad1b-3e84fcb8e33b","Type":"ContainerStarted","Data":"f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399"}
Jan 05 22:12:46 crc kubenswrapper[5034]: I0105 22:12:46.354998 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" event={"ID":"e86527c2-480f-4508-be25-9b2eab1f4274","Type":"ContainerStarted","Data":"d6abfd2461105e8e1eea8e2d6a6889e3b27bf573f5c2e81d53d24675eaa17698"}
Jan 05 22:12:46 crc kubenswrapper[5034]: I0105 22:12:46.394815 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-679959649b-bksnm" podStartSLOduration=7.496970871 podStartE2EDuration="11.394756449s" podCreationTimestamp="2026-01-05 22:12:35 +0000 UTC" firstStartedPulling="2026-01-05 22:12:40.968746354 +0000 UTC m=+1253.340745793" lastFinishedPulling="2026-01-05 22:12:44.866531932 +0000 UTC m=+1257.238531371" observedRunningTime="2026-01-05 22:12:46.371025727 +0000 UTC m=+1258.743025166" watchObservedRunningTime="2026-01-05 22:12:46.394756449 +0000 UTC m=+1258.766755888"
Jan 05 22:12:46 crc kubenswrapper[5034]: I0105 22:12:46.402769 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" podStartSLOduration=7.997294134 podStartE2EDuration="11.402738515s" podCreationTimestamp="2026-01-05 22:12:35 +0000 UTC" firstStartedPulling="2026-01-05 22:12:41.463805667 +0000 UTC m=+1253.835805106" lastFinishedPulling="2026-01-05 22:12:44.869250048 +0000 UTC m=+1257.241249487" observedRunningTime="2026-01-05 22:12:46.397040894 +0000 UTC m=+1258.769040333" watchObservedRunningTime="2026-01-05 22:12:46.402738515 +0000 UTC m=+1258.774737954"
Jan 05 22:12:48 crc kubenswrapper[5034]: I0105 22:12:48.319014 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55b7fd45b4-gmqkw"
Jan 05 22:12:48 crc kubenswrapper[5034]: I0105 22:12:48.384425 5034 generic.go:334] "Generic (PLEG): container finished" podID="93443f38-a401-43ed-8ba6-7e0ebef66eb5" containerID="3f25f80763e2de95eb07d3e84abe961949c5c4da5f365db39eff6df0d608f658" exitCode=0
Jan 05 22:12:48 crc kubenswrapper[5034]: I0105 22:12:48.384835 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sh52t" event={"ID":"93443f38-a401-43ed-8ba6-7e0ebef66eb5","Type":"ContainerDied","Data":"3f25f80763e2de95eb07d3e84abe961949c5c4da5f365db39eff6df0d608f658"}
Jan 05 22:12:49 crc kubenswrapper[5034]: I0105 22:12:49.996477 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55b7fd45b4-gmqkw"
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.468912 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.468970 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.549457 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sh52t"
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.632203 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-db-sync-config-data\") pod \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") "
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.632379 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-scripts\") pod \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") "
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.632500 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-config-data\") pod \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") "
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.632638 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-combined-ca-bundle\") pod \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") "
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.632736 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93443f38-a401-43ed-8ba6-7e0ebef66eb5-etc-machine-id\") pod \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") "
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.632951 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4kdt\" (UniqueName: \"kubernetes.io/projected/93443f38-a401-43ed-8ba6-7e0ebef66eb5-kube-api-access-v4kdt\") pod \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\" (UID: \"93443f38-a401-43ed-8ba6-7e0ebef66eb5\") "
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.634288 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93443f38-a401-43ed-8ba6-7e0ebef66eb5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "93443f38-a401-43ed-8ba6-7e0ebef66eb5" (UID: "93443f38-a401-43ed-8ba6-7e0ebef66eb5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.641578 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93443f38-a401-43ed-8ba6-7e0ebef66eb5-kube-api-access-v4kdt" (OuterVolumeSpecName: "kube-api-access-v4kdt") pod "93443f38-a401-43ed-8ba6-7e0ebef66eb5" (UID: "93443f38-a401-43ed-8ba6-7e0ebef66eb5"). InnerVolumeSpecName "kube-api-access-v4kdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.648517 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-scripts" (OuterVolumeSpecName: "scripts") pod "93443f38-a401-43ed-8ba6-7e0ebef66eb5" (UID: "93443f38-a401-43ed-8ba6-7e0ebef66eb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.669239 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "93443f38-a401-43ed-8ba6-7e0ebef66eb5" (UID: "93443f38-a401-43ed-8ba6-7e0ebef66eb5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.670901 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93443f38-a401-43ed-8ba6-7e0ebef66eb5" (UID: "93443f38-a401-43ed-8ba6-7e0ebef66eb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.694575 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-config-data" (OuterVolumeSpecName: "config-data") pod "93443f38-a401-43ed-8ba6-7e0ebef66eb5" (UID: "93443f38-a401-43ed-8ba6-7e0ebef66eb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.736253 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.736290 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93443f38-a401-43ed-8ba6-7e0ebef66eb5-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.736304 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4kdt\" (UniqueName: \"kubernetes.io/projected/93443f38-a401-43ed-8ba6-7e0ebef66eb5-kube-api-access-v4kdt\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.736320 5034 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.736332 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.736346 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93443f38-a401-43ed-8ba6-7e0ebef66eb5-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.779255 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48"
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.859096 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-8jvdk"]
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.859768 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" podUID="193104db-05bf-409d-88e5-17753e72f1b0" containerName="dnsmasq-dns" containerID="cri-o://9ef22f78caba23264c42b8edcef4887fa1515322e3b9576c99272cea78bc2bdb" gracePeriod=10
Jan 05 22:12:50 crc kubenswrapper[5034]: I0105 22:12:50.986859 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b9b4698bd-747dm"
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.087287 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b9b4698bd-747dm"
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.162004 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55b7fd45b4-gmqkw"]
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.165447 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api-log" containerID="cri-o://afcfeb6f8a89e08926ddc001cf81cf22fbb18f8ffc85fad65108fcbad7375510" gracePeriod=30
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.166011 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api" containerID="cri-o://f81f93bf4bcc59025be39bc83f701532657cba6298dc446886d5ccade7d655c6" gracePeriod=30
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.177219 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF"
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.177638 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF"
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.177736 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF"
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.177826 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF"
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.444686 5034 generic.go:334] "Generic (PLEG): container finished" podID="f44d5480-b711-4d12-b8df-55cb25360488" containerID="afcfeb6f8a89e08926ddc001cf81cf22fbb18f8ffc85fad65108fcbad7375510" exitCode=143
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.444797 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b7fd45b4-gmqkw" event={"ID":"f44d5480-b711-4d12-b8df-55cb25360488","Type":"ContainerDied","Data":"afcfeb6f8a89e08926ddc001cf81cf22fbb18f8ffc85fad65108fcbad7375510"}
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.451878 5034 generic.go:334] "Generic (PLEG): container finished" podID="193104db-05bf-409d-88e5-17753e72f1b0" containerID="9ef22f78caba23264c42b8edcef4887fa1515322e3b9576c99272cea78bc2bdb" exitCode=0
Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.451963 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" event={"ID":"193104db-05bf-409d-88e5-17753e72f1b0","Type":"ContainerDied","Data":"9ef22f78caba23264c42b8edcef4887fa1515322e3b9576c99272cea78bc2bdb"} Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.464051 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sh52t" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.464984 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sh52t" event={"ID":"93443f38-a401-43ed-8ba6-7e0ebef66eb5","Type":"ContainerDied","Data":"c9950fd583d61effaa59c817d8fcfc3604023b86e234808b3855806573b5d225"} Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.465049 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9950fd583d61effaa59c817d8fcfc3604023b86e234808b3855806573b5d225" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.941587 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:12:51 crc kubenswrapper[5034]: E0105 22:12:51.942460 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerName="dnsmasq-dns" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.942479 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerName="dnsmasq-dns" Jan 05 22:12:51 crc kubenswrapper[5034]: E0105 22:12:51.942503 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93443f38-a401-43ed-8ba6-7e0ebef66eb5" containerName="cinder-db-sync" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.942511 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="93443f38-a401-43ed-8ba6-7e0ebef66eb5" containerName="cinder-db-sync" Jan 05 22:12:51 crc kubenswrapper[5034]: E0105 22:12:51.942540 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerName="init" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.942549 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerName="init" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.942812 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="437dcc56-7197-479b-bcf2-747fdad8b85a" containerName="dnsmasq-dns" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.942846 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="93443f38-a401-43ed-8ba6-7e0ebef66eb5" containerName="cinder-db-sync" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.944063 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.955844 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.956090 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tdccx" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.959184 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.959356 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.974509 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.981604 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.981681 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.981726 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.981783 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxftv\" (UniqueName: \"kubernetes.io/projected/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-kube-api-access-bxftv\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.981823 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:51 crc kubenswrapper[5034]: I0105 22:12:51.981848 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.021103 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-c5krh"] Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.023529 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.045899 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-c5krh"] Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.085906 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.085980 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086068 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086137 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086177 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxftv\" (UniqueName: \"kubernetes.io/projected/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-kube-api-access-bxftv\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086204 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086234 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086262 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcbhf\" (UniqueName: \"kubernetes.io/projected/1c240b88-16fc-469e-b12f-64f70d8cde97-kube-api-access-kcbhf\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086281 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-config\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086308 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086349 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.086378 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.092982 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.093063 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.111066 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.111808 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.118719 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.118802 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxftv\" (UniqueName: \"kubernetes.io/projected/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-kube-api-access-bxftv\") pod \"cinder-scheduler-0\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " 
pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.195818 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.195928 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.195998 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcbhf\" (UniqueName: \"kubernetes.io/projected/1c240b88-16fc-469e-b12f-64f70d8cde97-kube-api-access-kcbhf\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.196018 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-config\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.196036 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.196065 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.197410 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.197465 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.198097 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.198810 
5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.200844 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-config\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.224295 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcbhf\" (UniqueName: \"kubernetes.io/projected/1c240b88-16fc-469e-b12f-64f70d8cde97-kube-api-access-kcbhf\") pod \"dnsmasq-dns-75bfc9b94f-c5krh\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") " pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.275372 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.277614 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.282353 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.284146 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.295484 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.299766 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.299892 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmb6\" (UniqueName: \"kubernetes.io/projected/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-kube-api-access-2cmb6\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.299914 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.299993 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.300047 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-scripts\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.300192 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-logs\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.300391 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.379814 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.402038 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.402367 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-scripts\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.403640 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-logs\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.403845 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.404048 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.404228 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmb6\" (UniqueName: \"kubernetes.io/projected/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-kube-api-access-2cmb6\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.404319 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.404514 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.405139 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-logs\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.408598 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-scripts\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.410508 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.411570 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.420623 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.425676 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmb6\" (UniqueName: \"kubernetes.io/projected/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-kube-api-access-2cmb6\") pod \"cinder-api-0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " pod="openstack/cinder-api-0" Jan 05 22:12:52 crc kubenswrapper[5034]: I0105 22:12:52.610716 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.374820 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.467850 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-sb\") pod \"193104db-05bf-409d-88e5-17753e72f1b0\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.467906 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-config\") pod \"193104db-05bf-409d-88e5-17753e72f1b0\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.468097 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-svc\") pod \"193104db-05bf-409d-88e5-17753e72f1b0\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.468132 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5hx7\" (UniqueName: \"kubernetes.io/projected/193104db-05bf-409d-88e5-17753e72f1b0-kube-api-access-v5hx7\") pod \"193104db-05bf-409d-88e5-17753e72f1b0\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.468162 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-nb\") pod \"193104db-05bf-409d-88e5-17753e72f1b0\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.468191 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-swift-storage-0\") pod \"193104db-05bf-409d-88e5-17753e72f1b0\" (UID: \"193104db-05bf-409d-88e5-17753e72f1b0\") " Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.501623 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193104db-05bf-409d-88e5-17753e72f1b0-kube-api-access-v5hx7" (OuterVolumeSpecName: "kube-api-access-v5hx7") pod "193104db-05bf-409d-88e5-17753e72f1b0" (UID: "193104db-05bf-409d-88e5-17753e72f1b0"). InnerVolumeSpecName "kube-api-access-v5hx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.564928 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.568216 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" event={"ID":"193104db-05bf-409d-88e5-17753e72f1b0","Type":"ContainerDied","Data":"2fb915bb9c3c3a9f5d9b2648ccf3ea0b55f2339fadb2a40e3d4c528c09e731ea"} Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.568272 5034 scope.go:117] "RemoveContainer" containerID="9ef22f78caba23264c42b8edcef4887fa1515322e3b9576c99272cea78bc2bdb" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.568431 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-8jvdk" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.573839 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5hx7\" (UniqueName: \"kubernetes.io/projected/193104db-05bf-409d-88e5-17753e72f1b0-kube-api-access-v5hx7\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.611861 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5","Type":"ContainerStarted","Data":"67783f2aa9303bb0333f3de7bb17b91d4a3d69d40b0546c16e934616eba788aa"} Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.635272 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="ceilometer-central-agent" containerID="cri-o://bff58d4173744054a45b690f7021750cf32fe87700ffee14cd9c2456b9349762" gracePeriod=30 Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.635602 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.636032 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="proxy-httpd" containerID="cri-o://886d080e5ba90abd5e79ae3895420c010b8591b45e6fdc95c67306a31eca611b" gracePeriod=30 Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.636104 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="sg-core" containerID="cri-o://99b7159e7efd2259210842400db47567ccc1de87c8306bb805a1cb95198ca5ab" gracePeriod=30 Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.636149 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="ceilometer-notification-agent" containerID="cri-o://77f7407b3079cc0feea7e70b078274014c8ff4e2ca6b9213f64bc2e1da621b38" gracePeriod=30 Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.718854 5034 scope.go:117] "RemoveContainer" containerID="95edfa9c69e2135b21b9ca62069ec17eadd79ca4c8dea12255126300eb4ff795" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.756070 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "193104db-05bf-409d-88e5-17753e72f1b0" (UID: "193104db-05bf-409d-88e5-17753e72f1b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.760878 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "193104db-05bf-409d-88e5-17753e72f1b0" (UID: "193104db-05bf-409d-88e5-17753e72f1b0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.769862 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.060371187 podStartE2EDuration="49.769832422s" podCreationTimestamp="2026-01-05 22:12:04 +0000 UTC" firstStartedPulling="2026-01-05 22:12:06.227840345 +0000 UTC m=+1218.599839784" lastFinishedPulling="2026-01-05 22:12:52.93730158 +0000 UTC m=+1265.309301019" observedRunningTime="2026-01-05 22:12:53.696592608 +0000 UTC m=+1266.068592047" watchObservedRunningTime="2026-01-05 22:12:53.769832422 +0000 UTC m=+1266.141831861" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.779323 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-config" (OuterVolumeSpecName: "config") pod "193104db-05bf-409d-88e5-17753e72f1b0" (UID: "193104db-05bf-409d-88e5-17753e72f1b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.790391 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.790448 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.790466 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.802469 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.817812 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "193104db-05bf-409d-88e5-17753e72f1b0" (UID: "193104db-05bf-409d-88e5-17753e72f1b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.831055 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "193104db-05bf-409d-88e5-17753e72f1b0" (UID: "193104db-05bf-409d-88e5-17753e72f1b0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.895438 5034 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.895594 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/193104db-05bf-409d-88e5-17753e72f1b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:53 crc kubenswrapper[5034]: I0105 22:12:53.995159 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-8jvdk"] Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.008513 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-8jvdk"] Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.031049 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-c5krh"] Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.668955 5034 generic.go:334] "Generic (PLEG): container finished" podID="1c240b88-16fc-469e-b12f-64f70d8cde97" containerID="02b44ac4b589fbdb6d5faa7d0afe501cc59257a7b136459596e6f7d7f14cc1f2" exitCode=0 Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.669538 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" event={"ID":"1c240b88-16fc-469e-b12f-64f70d8cde97","Type":"ContainerDied","Data":"02b44ac4b589fbdb6d5faa7d0afe501cc59257a7b136459596e6f7d7f14cc1f2"} Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.669579 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" event={"ID":"1c240b88-16fc-469e-b12f-64f70d8cde97","Type":"ContainerStarted","Data":"0dfc55b3e22055a1bc7a302296a718c77497ea8f35f289bc3903ea7cef7a0f41"} Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.684746 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0","Type":"ContainerStarted","Data":"1b03df80ae5811be033c9ceff5806e23b41751179cb5b11e405bc8df79b317b0"} Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.689221 5034 generic.go:334] "Generic (PLEG): container finished" podID="ac333e19-4263-460f-8fe5-d950677ef64f" containerID="886d080e5ba90abd5e79ae3895420c010b8591b45e6fdc95c67306a31eca611b" exitCode=0 Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.689258 5034 generic.go:334] "Generic (PLEG): container finished" podID="ac333e19-4263-460f-8fe5-d950677ef64f" containerID="99b7159e7efd2259210842400db47567ccc1de87c8306bb805a1cb95198ca5ab" exitCode=2 Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.689266 5034 generic.go:334] "Generic (PLEG): container finished" podID="ac333e19-4263-460f-8fe5-d950677ef64f" containerID="bff58d4173744054a45b690f7021750cf32fe87700ffee14cd9c2456b9349762" exitCode=0 Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.689290 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerDied","Data":"886d080e5ba90abd5e79ae3895420c010b8591b45e6fdc95c67306a31eca611b"} Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.689318 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerDied","Data":"99b7159e7efd2259210842400db47567ccc1de87c8306bb805a1cb95198ca5ab"} Jan 05 22:12:54 crc kubenswrapper[5034]: I0105 22:12:54.689330 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerDied","Data":"bff58d4173744054a45b690f7021750cf32fe87700ffee14cd9c2456b9349762"} Jan 05 22:12:55 crc kubenswrapper[5034]: I0105 22:12:55.238305 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:55 crc kubenswrapper[5034]: I0105 22:12:55.716794 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0","Type":"ContainerStarted","Data":"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea"} Jan 05 22:12:55 crc kubenswrapper[5034]: I0105 22:12:55.720453 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" event={"ID":"1c240b88-16fc-469e-b12f-64f70d8cde97","Type":"ContainerStarted","Data":"1bed29012010123310b4525d0409cb3ef6a3c05ba563c641a31b318cc916ccd9"} Jan 05 22:12:55 crc kubenswrapper[5034]: I0105 22:12:55.721710 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:12:55 crc kubenswrapper[5034]: I0105 22:12:55.723772 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5","Type":"ContainerStarted","Data":"c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616"} Jan 05 22:12:55 crc kubenswrapper[5034]: I0105 22:12:55.751357 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" podStartSLOduration=4.751305889 podStartE2EDuration="4.751305889s" podCreationTimestamp="2026-01-05 22:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:55.747283786 +0000 UTC m=+1268.119283225" watchObservedRunningTime="2026-01-05 22:12:55.751305889 +0000 UTC m=+1268.123305318" Jan 05 22:12:55 crc kubenswrapper[5034]: I0105 22:12:55.857510 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193104db-05bf-409d-88e5-17753e72f1b0" path="/var/lib/kubelet/pods/193104db-05bf-409d-88e5-17753e72f1b0/volumes" Jan 05 22:12:56 crc kubenswrapper[5034]: I0105 22:12:56.741714 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0","Type":"ContainerStarted","Data":"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02"} Jan 05 22:12:56 crc kubenswrapper[5034]: I0105 22:12:56.742329 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 22:12:56 crc kubenswrapper[5034]: I0105 22:12:56.741812 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerName="cinder-api-log" containerID="cri-o://78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea" gracePeriod=30 Jan 05 22:12:56 crc kubenswrapper[5034]: I0105 22:12:56.742455 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerName="cinder-api" 
containerID="cri-o://54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02" gracePeriod=30 Jan 05 22:12:56 crc kubenswrapper[5034]: I0105 22:12:56.751825 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5","Type":"ContainerStarted","Data":"2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6"} Jan 05 22:12:56 crc kubenswrapper[5034]: I0105 22:12:56.803567 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.803545604 podStartE2EDuration="4.803545604s" podCreationTimestamp="2026-01-05 22:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:12:56.779563315 +0000 UTC m=+1269.151562754" watchObservedRunningTime="2026-01-05 22:12:56.803545604 +0000 UTC m=+1269.175545043" Jan 05 22:12:56 crc kubenswrapper[5034]: I0105 22:12:56.804502 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.8009688950000005 podStartE2EDuration="5.804495241s" podCreationTimestamp="2026-01-05 22:12:51 +0000 UTC" firstStartedPulling="2026-01-05 22:12:53.507732948 +0000 UTC m=+1265.879732387" lastFinishedPulling="2026-01-05 22:12:54.511259294 +0000 UTC m=+1266.883258733" observedRunningTime="2026-01-05 22:12:56.801314881 +0000 UTC m=+1269.173314340" watchObservedRunningTime="2026-01-05 22:12:56.804495241 +0000 UTC m=+1269.176494680" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.284213 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.381539 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.484740 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data-custom\") pod \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.484800 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-etc-machine-id\") pod \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.484870 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-scripts\") pod \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.484927 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data\") pod \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.485193 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-combined-ca-bundle\") pod \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.485972 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" (UID: "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.486036 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cmb6\" (UniqueName: \"kubernetes.io/projected/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-kube-api-access-2cmb6\") pod \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.486095 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-logs\") pod \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\" (UID: \"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.486634 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.486982 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-logs" (OuterVolumeSpecName: "logs") pod "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" (UID: "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.493787 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" (UID: "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.494371 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-kube-api-access-2cmb6" (OuterVolumeSpecName: "kube-api-access-2cmb6") pod "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" (UID: "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0"). InnerVolumeSpecName "kube-api-access-2cmb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.495258 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-scripts" (OuterVolumeSpecName: "scripts") pod "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" (UID: "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.520802 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" (UID: "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.550750 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data" (OuterVolumeSpecName: "config-data") pod "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" (UID: "58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.588086 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.588472 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.588483 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cmb6\" (UniqueName: \"kubernetes.io/projected/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-kube-api-access-2cmb6\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.588497 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.588507 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.588515 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.779833 5034 generic.go:334] "Generic (PLEG): container finished" podID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerID="54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02" exitCode=0 Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.779867 5034 generic.go:334] "Generic (PLEG): container finished" podID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerID="78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea" exitCode=143 Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.779935 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0","Type":"ContainerDied","Data":"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02"} Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.779958 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.779967 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0","Type":"ContainerDied","Data":"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea"} Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.780059 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0","Type":"ContainerDied","Data":"1b03df80ae5811be033c9ceff5806e23b41751179cb5b11e405bc8df79b317b0"} Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.779978 5034 scope.go:117] "RemoveContainer" containerID="54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.787273 5034 generic.go:334] "Generic (PLEG): container finished" podID="ac333e19-4263-460f-8fe5-d950677ef64f" containerID="77f7407b3079cc0feea7e70b078274014c8ff4e2ca6b9213f64bc2e1da621b38" exitCode=0 Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.791454 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerDied","Data":"77f7407b3079cc0feea7e70b078274014c8ff4e2ca6b9213f64bc2e1da621b38"} Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.823450 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.830090 5034 scope.go:117] "RemoveContainer" containerID="78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.855069 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.862678 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:57 crc kubenswrapper[5034]: E0105 22:12:57.863224 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerName="cinder-api-log" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.863247 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerName="cinder-api-log" Jan 05 22:12:57 crc kubenswrapper[5034]: E0105 22:12:57.863268 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerName="cinder-api" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.863276 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerName="cinder-api" Jan 05 22:12:57 crc kubenswrapper[5034]: E0105 22:12:57.863310 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193104db-05bf-409d-88e5-17753e72f1b0" containerName="dnsmasq-dns" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.863318 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="193104db-05bf-409d-88e5-17753e72f1b0" containerName="dnsmasq-dns" Jan 05 22:12:57 crc kubenswrapper[5034]: E0105 22:12:57.863325 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193104db-05bf-409d-88e5-17753e72f1b0" containerName="init" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.863333 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="193104db-05bf-409d-88e5-17753e72f1b0" containerName="init" Jan 05 
22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.863516 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerName="cinder-api" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.863534 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" containerName="cinder-api-log" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.863547 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="193104db-05bf-409d-88e5-17753e72f1b0" containerName="dnsmasq-dns" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.868128 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.872930 5034 scope.go:117] "RemoveContainer" containerID="54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.873160 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.873176 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.873822 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 05 22:12:57 crc kubenswrapper[5034]: E0105 22:12:57.880266 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02\": container with ID starting with 54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02 not found: ID does not exist" containerID="54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.880326 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02"} err="failed to get container status \"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02\": rpc error: code = NotFound desc = could not find container \"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02\": container with ID starting with 54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02 not found: ID does not exist" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.880360 5034 scope.go:117] "RemoveContainer" containerID="78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.880588 5034 util.go:48] "No ready sandbox for pod can be found. 
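[Annotation] The NotFound errors in this stretch are benign: the containers were already removed, and the kubelet's cleanup logs the "ID does not exist" response and carries on. A sketch of that idempotent-delete pattern follows; the `remove` callback and `ErrNotFound` sentinel are invented for illustration — the real kubelet checks a gRPC NotFound status returned by the CRI runtime.

    package main

    import (
        "errors"
        "fmt"
    )

    // ErrNotFound stands in for the CRI's gRPC NotFound status.
    var ErrNotFound = errors.New("container not found")

    // removeContainer deletes a container but treats "already gone" as
    // success, which is why the DeleteContainer errors above are logged
    // and then ignored rather than retried.
    func removeContainer(remove func(id string) error, id string) error {
        if err := remove(id); err != nil {
            if errors.Is(err, ErrNotFound) {
                fmt.Printf("container %s already removed, nothing to do\n", id)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        gone := func(id string) error { return fmt.Errorf("rpc error: %w", ErrNotFound) }
        if err := removeContainer(gone, "54cd2dc3c1b6"); err != nil {
            panic(err)
        }
    }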
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:12:57 crc kubenswrapper[5034]: E0105 22:12:57.882897 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea\": container with ID starting with 78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea not found: ID does not exist" containerID="78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.882939 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea"} err="failed to get container status \"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea\": rpc error: code = NotFound desc = could not find container \"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea\": container with ID starting with 78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea not found: ID does not exist" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.882971 5034 scope.go:117] "RemoveContainer" containerID="54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.885917 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.892469 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02"} err="failed to get container status \"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02\": rpc error: code = NotFound desc = could not find container \"54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02\": container with ID starting with 54cd2dc3c1b69e1dffb43e84f7df218f10b299c05818cfcb5d9c8cbc91739d02 not found: ID does not exist" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.892516 5034 scope.go:117] "RemoveContainer" containerID="78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.893189 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea"} err="failed to get container status \"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea\": rpc error: code = NotFound desc = could not find container \"78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea\": container with ID starting with 78b10527e143b483eb382aae6c77cc646c941b63cf13d0d929b0d6eea8f79dea not found: ID does not exist" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997317 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-scripts\") pod \"ac333e19-4263-460f-8fe5-d950677ef64f\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997393 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-config-data\") pod \"ac333e19-4263-460f-8fe5-d950677ef64f\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997434 5034 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhp49\" (UniqueName: \"kubernetes.io/projected/ac333e19-4263-460f-8fe5-d950677ef64f-kube-api-access-zhp49\") pod \"ac333e19-4263-460f-8fe5-d950677ef64f\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997464 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-log-httpd\") pod \"ac333e19-4263-460f-8fe5-d950677ef64f\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997483 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-run-httpd\") pod \"ac333e19-4263-460f-8fe5-d950677ef64f\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997517 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-combined-ca-bundle\") pod \"ac333e19-4263-460f-8fe5-d950677ef64f\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997653 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-sg-core-conf-yaml\") pod \"ac333e19-4263-460f-8fe5-d950677ef64f\" (UID: \"ac333e19-4263-460f-8fe5-d950677ef64f\") " Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997915 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.997974 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.998063 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-scripts\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.998107 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.998143 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54834f39-7569-4cf3-812d-2c6d1bd161b8-logs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " 
pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.998181 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54834f39-7569-4cf3-812d-2c6d1bd161b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.998286 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84k2\" (UniqueName: \"kubernetes.io/projected/54834f39-7569-4cf3-812d-2c6d1bd161b8-kube-api-access-x84k2\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.998314 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.998350 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.998473 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac333e19-4263-460f-8fe5-d950677ef64f" (UID: "ac333e19-4263-460f-8fe5-d950677ef64f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:12:57 crc kubenswrapper[5034]: I0105 22:12:57.999220 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac333e19-4263-460f-8fe5-d950677ef64f" (UID: "ac333e19-4263-460f-8fe5-d950677ef64f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.005664 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac333e19-4263-460f-8fe5-d950677ef64f-kube-api-access-zhp49" (OuterVolumeSpecName: "kube-api-access-zhp49") pod "ac333e19-4263-460f-8fe5-d950677ef64f" (UID: "ac333e19-4263-460f-8fe5-d950677ef64f"). InnerVolumeSpecName "kube-api-access-zhp49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.008730 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-scripts" (OuterVolumeSpecName: "scripts") pod "ac333e19-4263-460f-8fe5-d950677ef64f" (UID: "ac333e19-4263-460f-8fe5-d950677ef64f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.032485 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac333e19-4263-460f-8fe5-d950677ef64f" (UID: "ac333e19-4263-460f-8fe5-d950677ef64f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.094440 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac333e19-4263-460f-8fe5-d950677ef64f" (UID: "ac333e19-4263-460f-8fe5-d950677ef64f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100010 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54834f39-7569-4cf3-812d-2c6d1bd161b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100161 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x84k2\" (UniqueName: \"kubernetes.io/projected/54834f39-7569-4cf3-812d-2c6d1bd161b8-kube-api-access-x84k2\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100181 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100209 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100231 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100293 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100345 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-scripts\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100367 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100394 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54834f39-7569-4cf3-812d-2c6d1bd161b8-logs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100447 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100458 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhp49\" (UniqueName: \"kubernetes.io/projected/ac333e19-4263-460f-8fe5-d950677ef64f-kube-api-access-zhp49\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100467 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100475 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac333e19-4263-460f-8fe5-d950677ef64f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100485 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100495 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100915 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54834f39-7569-4cf3-812d-2c6d1bd161b8-logs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.100972 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54834f39-7569-4cf3-812d-2c6d1bd161b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.105955 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.106547 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " 
pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.111876 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.115617 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.116651 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-scripts\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.125989 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-config-data" (OuterVolumeSpecName: "config-data") pod "ac333e19-4263-460f-8fe5-d950677ef64f" (UID: "ac333e19-4263-460f-8fe5-d950677ef64f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.128212 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.130422 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x84k2\" (UniqueName: \"kubernetes.io/projected/54834f39-7569-4cf3-812d-2c6d1bd161b8-kube-api-access-x84k2\") pod \"cinder-api-0\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.202012 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac333e19-4263-460f-8fe5-d950677ef64f-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.233786 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.599428 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:40706->10.217.0.159:9311: read: connection reset by peer" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.599576 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55b7fd45b4-gmqkw" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:40696->10.217.0.159:9311: read: connection reset by peer" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.701849 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:12:58 crc kubenswrapper[5034]: W0105 22:12:58.706110 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54834f39_7569_4cf3_812d_2c6d1bd161b8.slice/crio-7f0345b7c44eec7a1a727560981fb89ee6b4ce7d95cc274cae05ee7604d64312 WatchSource:0}: Error finding container 7f0345b7c44eec7a1a727560981fb89ee6b4ce7d95cc274cae05ee7604d64312: Status 404 returned error can't find the container with id 7f0345b7c44eec7a1a727560981fb89ee6b4ce7d95cc274cae05ee7604d64312 Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.816675 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac333e19-4263-460f-8fe5-d950677ef64f","Type":"ContainerDied","Data":"7fab7983c003e6555c8aa8681dd55dd4827dab9898e12c28a207e008c3c93457"} Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.816735 5034 scope.go:117] "RemoveContainer" containerID="886d080e5ba90abd5e79ae3895420c010b8591b45e6fdc95c67306a31eca611b" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.816735 5034 util.go:48] "No ready sandbox for pod can be found. 
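[Annotation] The two "Probe failed" entries above are the kubelet's HTTP readiness checks reaching barbican-api just as its containers shut down, hence "connection reset by peer". A stdlib-only sketch of an equivalent check: the URL is copied from the log entry, and the 2xx/3xx success rule matches the kubelet's HTTP probe convention, but this is an illustration rather than the prober's actual code.

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce performs one HTTP readiness check, roughly what the
    // kubelet prober does for the barbican-api healthcheck above.
    func probeOnce(url string) error {
        ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
        defer cancel()
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return err
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err // e.g. "connection reset by peer" while the pod restarts
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("probe failed: status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        // Address taken from the log line above; adjust for your environment.
        if err := probeOnce("http://10.217.0.159:9311/healthcheck"); err != nil {
            fmt.Println("Probe failed:", err)
        } else {
            fmt.Println("ready")
        }
    }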
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.819048 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54834f39-7569-4cf3-812d-2c6d1bd161b8","Type":"ContainerStarted","Data":"7f0345b7c44eec7a1a727560981fb89ee6b4ce7d95cc274cae05ee7604d64312"} Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.824244 5034 generic.go:334] "Generic (PLEG): container finished" podID="f44d5480-b711-4d12-b8df-55cb25360488" containerID="f81f93bf4bcc59025be39bc83f701532657cba6298dc446886d5ccade7d655c6" exitCode=0 Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.824324 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b7fd45b4-gmqkw" event={"ID":"f44d5480-b711-4d12-b8df-55cb25360488","Type":"ContainerDied","Data":"f81f93bf4bcc59025be39bc83f701532657cba6298dc446886d5ccade7d655c6"} Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.843383 5034 scope.go:117] "RemoveContainer" containerID="99b7159e7efd2259210842400db47567ccc1de87c8306bb805a1cb95198ca5ab" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.878200 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.891500 5034 scope.go:117] "RemoveContainer" containerID="77f7407b3079cc0feea7e70b078274014c8ff4e2ca6b9213f64bc2e1da621b38" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.902221 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.911141 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:58 crc kubenswrapper[5034]: E0105 22:12:58.911598 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="ceilometer-central-agent" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.911611 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="ceilometer-central-agent" Jan 05 22:12:58 crc kubenswrapper[5034]: E0105 22:12:58.911622 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="ceilometer-notification-agent" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.911628 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="ceilometer-notification-agent" Jan 05 22:12:58 crc kubenswrapper[5034]: E0105 22:12:58.911639 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="proxy-httpd" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.911645 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="proxy-httpd" Jan 05 22:12:58 crc kubenswrapper[5034]: E0105 22:12:58.911660 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="sg-core" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.911666 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="sg-core" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.911823 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="sg-core" Jan 05 22:12:58 crc 
kubenswrapper[5034]: I0105 22:12:58.911844 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="ceilometer-notification-agent" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.911907 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="ceilometer-central-agent" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.911920 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" containerName="proxy-httpd" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.913578 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.922979 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.930067 5034 scope.go:117] "RemoveContainer" containerID="bff58d4173744054a45b690f7021750cf32fe87700ffee14cd9c2456b9349762" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.930590 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 22:12:58 crc kubenswrapper[5034]: I0105 22:12:58.930777 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.021390 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgrzf\" (UniqueName: \"kubernetes.io/projected/dd157695-0fa5-4a5d-a462-75675460fdf3-kube-api-access-vgrzf\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.021459 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-run-httpd\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.021585 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-config-data\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.021642 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.021678 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-log-httpd\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.021740 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.021879 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-scripts\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.124379 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-log-httpd\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.124844 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.124870 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-scripts\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.125851 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-log-httpd\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.125948 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgrzf\" (UniqueName: \"kubernetes.io/projected/dd157695-0fa5-4a5d-a462-75675460fdf3-kube-api-access-vgrzf\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.125987 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-run-httpd\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.126093 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-config-data\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.126194 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.126560 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-run-httpd\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.136981 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.137352 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.137545 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-config-data\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.137660 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-scripts\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.144716 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgrzf\" (UniqueName: \"kubernetes.io/projected/dd157695-0fa5-4a5d-a462-75675460fdf3-kube-api-access-vgrzf\") pod \"ceilometer-0\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.245461 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.257349 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.329706 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data\") pod \"f44d5480-b711-4d12-b8df-55cb25360488\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.329818 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44d5480-b711-4d12-b8df-55cb25360488-logs\") pod \"f44d5480-b711-4d12-b8df-55cb25360488\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.329941 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data-custom\") pod \"f44d5480-b711-4d12-b8df-55cb25360488\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.330056 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-combined-ca-bundle\") pod \"f44d5480-b711-4d12-b8df-55cb25360488\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.330273 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqrpx\" (UniqueName: \"kubernetes.io/projected/f44d5480-b711-4d12-b8df-55cb25360488-kube-api-access-hqrpx\") pod \"f44d5480-b711-4d12-b8df-55cb25360488\" (UID: \"f44d5480-b711-4d12-b8df-55cb25360488\") " Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.330519 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44d5480-b711-4d12-b8df-55cb25360488-logs" (OuterVolumeSpecName: "logs") pod "f44d5480-b711-4d12-b8df-55cb25360488" (UID: "f44d5480-b711-4d12-b8df-55cb25360488"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.330801 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44d5480-b711-4d12-b8df-55cb25360488-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.340957 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f44d5480-b711-4d12-b8df-55cb25360488" (UID: "f44d5480-b711-4d12-b8df-55cb25360488"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.341121 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44d5480-b711-4d12-b8df-55cb25360488-kube-api-access-hqrpx" (OuterVolumeSpecName: "kube-api-access-hqrpx") pod "f44d5480-b711-4d12-b8df-55cb25360488" (UID: "f44d5480-b711-4d12-b8df-55cb25360488"). InnerVolumeSpecName "kube-api-access-hqrpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.386525 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f44d5480-b711-4d12-b8df-55cb25360488" (UID: "f44d5480-b711-4d12-b8df-55cb25360488"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.412202 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data" (OuterVolumeSpecName: "config-data") pod "f44d5480-b711-4d12-b8df-55cb25360488" (UID: "f44d5480-b711-4d12-b8df-55cb25360488"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.432367 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.432398 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqrpx\" (UniqueName: \"kubernetes.io/projected/f44d5480-b711-4d12-b8df-55cb25360488-kube-api-access-hqrpx\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.432408 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.432421 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f44d5480-b711-4d12-b8df-55cb25360488-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.566462 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.737429 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:12:59 crc kubenswrapper[5034]: W0105 22:12:59.758160 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd157695_0fa5_4a5d_a462_75675460fdf3.slice/crio-0bc36fc6bdff89faf0342f4a74c8c5417b068cad247de733b5c4295f7ae1733d WatchSource:0}: Error finding container 0bc36fc6bdff89faf0342f4a74c8c5417b068cad247de733b5c4295f7ae1733d: Status 404 returned error can't find the container with id 0bc36fc6bdff89faf0342f4a74c8c5417b068cad247de733b5c4295f7ae1733d Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.858226 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0" path="/var/lib/kubelet/pods/58ef7cb8-f0cd-4bc1-ad7e-c76dbfcf72d0/volumes" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.859932 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac333e19-4263-460f-8fe5-d950677ef64f" path="/var/lib/kubelet/pods/ac333e19-4263-460f-8fe5-d950677ef64f/volumes" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.876728 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b7fd45b4-gmqkw" 
event={"ID":"f44d5480-b711-4d12-b8df-55cb25360488","Type":"ContainerDied","Data":"90336de22dbf7168d6ad2d024a3733c7082cabe1af1575e30b6fd377f03d033e"} Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.876789 5034 scope.go:117] "RemoveContainer" containerID="f81f93bf4bcc59025be39bc83f701532657cba6298dc446886d5ccade7d655c6" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.876905 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55b7fd45b4-gmqkw" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.888843 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54834f39-7569-4cf3-812d-2c6d1bd161b8","Type":"ContainerStarted","Data":"8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c"} Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.889904 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerStarted","Data":"0bc36fc6bdff89faf0342f4a74c8c5417b068cad247de733b5c4295f7ae1733d"} Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.953588 5034 scope.go:117] "RemoveContainer" containerID="afcfeb6f8a89e08926ddc001cf81cf22fbb18f8ffc85fad65108fcbad7375510" Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.958963 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55b7fd45b4-gmqkw"] Jan 05 22:12:59 crc kubenswrapper[5034]: I0105 22:12:59.968599 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55b7fd45b4-gmqkw"] Jan 05 22:13:00 crc kubenswrapper[5034]: I0105 22:13:00.914156 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54834f39-7569-4cf3-812d-2c6d1bd161b8","Type":"ContainerStarted","Data":"d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47"} Jan 05 22:13:00 crc kubenswrapper[5034]: I0105 22:13:00.914935 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 22:13:00 crc kubenswrapper[5034]: I0105 22:13:00.924916 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerStarted","Data":"e94ed2a822f01b1a0a13f5468cb23be6dd62055b0c88dc8bdd7c5cdd47496b81"} Jan 05 22:13:00 crc kubenswrapper[5034]: I0105 22:13:00.946222 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.946199008 podStartE2EDuration="3.946199008s" podCreationTimestamp="2026-01-05 22:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:00.937393818 +0000 UTC m=+1273.309393277" watchObservedRunningTime="2026-01-05 22:13:00.946199008 +0000 UTC m=+1273.318198447" Jan 05 22:13:01 crc kubenswrapper[5034]: I0105 22:13:01.854052 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44d5480-b711-4d12-b8df-55cb25360488" path="/var/lib/kubelet/pods/f44d5480-b711-4d12-b8df-55cb25360488/volumes" Jan 05 22:13:01 crc kubenswrapper[5034]: I0105 22:13:01.859554 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:13:01 crc kubenswrapper[5034]: I0105 22:13:01.933219 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8d6946db8-g2jm7"] Jan 05 22:13:01 crc 
kubenswrapper[5034]: I0105 22:13:01.933847 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8d6946db8-g2jm7" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerName="neutron-api" containerID="cri-o://0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba" gracePeriod=30 Jan 05 22:13:01 crc kubenswrapper[5034]: I0105 22:13:01.934616 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8d6946db8-g2jm7" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerName="neutron-httpd" containerID="cri-o://325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f" gracePeriod=30 Jan 05 22:13:01 crc kubenswrapper[5034]: I0105 22:13:01.951379 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerStarted","Data":"490e6ea138a45f3784fc49d709dcc86c11116775e90db5fe78d3999ad7a1233f"} Jan 05 22:13:01 crc kubenswrapper[5034]: I0105 22:13:01.951491 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerStarted","Data":"4c17fc6de0e641c864d73011a4352bc83d38c128d2fddc3d4bc1013886a878b1"} Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.382222 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.470698 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-w8l48"] Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.474458 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" podUID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" containerName="dnsmasq-dns" containerID="cri-o://fa897c95f67821d169e5ece96f92320860ef62c35cb6a30998ee88f589988069" gracePeriod=10 Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.544593 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.614719 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.965001 5034 generic.go:334] "Generic (PLEG): container finished" podID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerID="325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f" exitCode=0 Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.965227 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d6946db8-g2jm7" event={"ID":"5ab942c4-0db8-41b2-87c2-5bfedd95c49a","Type":"ContainerDied","Data":"325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f"} Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.975352 5034 generic.go:334] "Generic (PLEG): container finished" podID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" containerID="fa897c95f67821d169e5ece96f92320860ef62c35cb6a30998ee88f589988069" exitCode=0 Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.975423 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" event={"ID":"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8","Type":"ContainerDied","Data":"fa897c95f67821d169e5ece96f92320860ef62c35cb6a30998ee88f589988069"} Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.975676 5034 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerName="cinder-scheduler" containerID="cri-o://c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616" gracePeriod=30 Jan 05 22:13:02 crc kubenswrapper[5034]: I0105 22:13:02.975781 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerName="probe" containerID="cri-o://2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6" gracePeriod=30 Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.090174 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.222389 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzgtm\" (UniqueName: \"kubernetes.io/projected/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-kube-api-access-lzgtm\") pod \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.222562 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-nb\") pod \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.222620 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-svc\") pod \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.222696 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-config\") pod \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.222797 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-swift-storage-0\") pod \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.222832 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-sb\") pod \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.231348 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-kube-api-access-lzgtm" (OuterVolumeSpecName: "kube-api-access-lzgtm") pod "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" (UID: "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8"). InnerVolumeSpecName "kube-api-access-lzgtm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.286913 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-config" (OuterVolumeSpecName: "config") pod "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" (UID: "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.299633 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" (UID: "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.316817 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" (UID: "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.321302 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" (UID: "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.324704 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" (UID: "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.327154 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-svc\") pod \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\" (UID: \"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8\") " Jan 05 22:13:03 crc kubenswrapper[5034]: W0105 22:13:03.327289 5034 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8/volumes/kubernetes.io~configmap/dns-svc Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.327314 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" (UID: "e1a9540d-b6f0-40d6-8138-91dbf7efa1f8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.327987 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzgtm\" (UniqueName: \"kubernetes.io/projected/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-kube-api-access-lzgtm\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.328033 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.328046 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.328056 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.328064 5034 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.328090 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.992467 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerStarted","Data":"e734ff8efbbf9f9dba64af395e7f26949b4ca0e43394f22225b26d2b2299af0d"} Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.993630 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.996786 5034 generic.go:334] "Generic (PLEG): container finished" podID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerID="2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6" exitCode=0 Jan 05 22:13:03 crc kubenswrapper[5034]: I0105 22:13:03.996840 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5","Type":"ContainerDied","Data":"2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6"} Jan 05 22:13:04 crc kubenswrapper[5034]: I0105 22:13:04.013801 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" event={"ID":"e1a9540d-b6f0-40d6-8138-91dbf7efa1f8","Type":"ContainerDied","Data":"46a61e9707f7d32bf998e5c5a9c4db4627051f831fb2f893bedd0542848c1580"} Jan 05 22:13:04 crc kubenswrapper[5034]: I0105 22:13:04.013923 5034 scope.go:117] "RemoveContainer" containerID="fa897c95f67821d169e5ece96f92320860ef62c35cb6a30998ee88f589988069" Jan 05 22:13:04 crc kubenswrapper[5034]: I0105 22:13:04.014219 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-w8l48" Jan 05 22:13:04 crc kubenswrapper[5034]: I0105 22:13:04.039229 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.0430392 podStartE2EDuration="6.039205559s" podCreationTimestamp="2026-01-05 22:12:58 +0000 UTC" firstStartedPulling="2026-01-05 22:12:59.76256574 +0000 UTC m=+1272.134565179" lastFinishedPulling="2026-01-05 22:13:02.758732099 +0000 UTC m=+1275.130731538" observedRunningTime="2026-01-05 22:13:04.021274201 +0000 UTC m=+1276.393273640" watchObservedRunningTime="2026-01-05 22:13:04.039205559 +0000 UTC m=+1276.411205008" Jan 05 22:13:04 crc kubenswrapper[5034]: I0105 22:13:04.065121 5034 scope.go:117] "RemoveContainer" containerID="dace7e39433a4fdf74bc06795a42ae68ec7df0a1e7b30dbb2b94b84c144a5a9c" Jan 05 22:13:04 crc kubenswrapper[5034]: I0105 22:13:04.078250 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-w8l48"] Jan 05 22:13:04 crc kubenswrapper[5034]: I0105 22:13:04.127277 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-w8l48"] Jan 05 22:13:05 crc kubenswrapper[5034]: I0105 22:13:05.851808 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" path="/var/lib/kubelet/pods/e1a9540d-b6f0-40d6-8138-91dbf7efa1f8/volumes" Jan 05 22:13:06 crc kubenswrapper[5034]: I0105 22:13:06.633904 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:13:06 crc kubenswrapper[5034]: I0105 22:13:06.634658 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.577226 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.710063 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.729039 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxftv\" (UniqueName: \"kubernetes.io/projected/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-kube-api-access-bxftv\") pod \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.738151 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-etc-machine-id\") pod \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.738246 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-scripts\") pod \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.738267 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data\") pod \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.738378 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data-custom\") pod \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.738451 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-combined-ca-bundle\") pod \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\" (UID: \"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5\") " Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.748659 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" (UID: "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.781768 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-scripts" (OuterVolumeSpecName: "scripts") pod "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" (UID: "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.783396 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" (UID: "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.786408 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-kube-api-access-bxftv" (OuterVolumeSpecName: "kube-api-access-bxftv") pod "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" (UID: "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5"). InnerVolumeSpecName "kube-api-access-bxftv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.855174 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.855208 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.855220 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.855230 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxftv\" (UniqueName: \"kubernetes.io/projected/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-kube-api-access-bxftv\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.866331 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" (UID: "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.904241 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.944942 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data" (OuterVolumeSpecName: "config-data") pod "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" (UID: "ce7d50bf-c644-42a0-a922-49c7b8e4f7e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.957353 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:07 crc kubenswrapper[5034]: I0105 22:13:07.957400 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.057357 5034 generic.go:334] "Generic (PLEG): container finished" podID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerID="c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616" exitCode=0 Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.057417 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5","Type":"ContainerDied","Data":"c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616"} Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.057430 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.057949 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce7d50bf-c644-42a0-a922-49c7b8e4f7e5","Type":"ContainerDied","Data":"67783f2aa9303bb0333f3de7bb17b91d4a3d69d40b0546c16e934616eba788aa"} Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.057983 5034 scope.go:117] "RemoveContainer" containerID="2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.058518 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk2gp\" (UniqueName: \"kubernetes.io/projected/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-kube-api-access-pk2gp\") pod \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.058609 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-combined-ca-bundle\") pod \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.058671 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-config\") pod \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.058756 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-httpd-config\") pod \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\" (UID: \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.058879 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-ovndb-tls-certs\") pod \"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\" (UID: 
\"5ab942c4-0db8-41b2-87c2-5bfedd95c49a\") " Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.063017 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-kube-api-access-pk2gp" (OuterVolumeSpecName: "kube-api-access-pk2gp") pod "5ab942c4-0db8-41b2-87c2-5bfedd95c49a" (UID: "5ab942c4-0db8-41b2-87c2-5bfedd95c49a"). InnerVolumeSpecName "kube-api-access-pk2gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.066351 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5ab942c4-0db8-41b2-87c2-5bfedd95c49a" (UID: "5ab942c4-0db8-41b2-87c2-5bfedd95c49a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.069458 5034 generic.go:334] "Generic (PLEG): container finished" podID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerID="0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba" exitCode=0 Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.069530 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d6946db8-g2jm7" event={"ID":"5ab942c4-0db8-41b2-87c2-5bfedd95c49a","Type":"ContainerDied","Data":"0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba"} Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.069649 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d6946db8-g2jm7" event={"ID":"5ab942c4-0db8-41b2-87c2-5bfedd95c49a","Type":"ContainerDied","Data":"4643281ecaea23e35654615d68c02f66dce02473a84c30efdd5e8d86be8fd0a7"} Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.069663 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8d6946db8-g2jm7" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.089832 5034 scope.go:117] "RemoveContainer" containerID="c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.113588 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.119026 5034 scope.go:117] "RemoveContainer" containerID="2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.119833 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6\": container with ID starting with 2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6 not found: ID does not exist" containerID="2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.119879 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6"} err="failed to get container status \"2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6\": rpc error: code = NotFound desc = could not find container \"2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6\": container with ID starting with 2b18a70f3b5f57ede48318a6bbfbf865369cf3665144144fc25b49b6980214b6 not found: ID does not exist" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.119909 5034 scope.go:117] "RemoveContainer" containerID="c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.120305 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616\": container with ID starting with c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616 not found: ID does not exist" containerID="c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.120343 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616"} err="failed to get container status \"c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616\": rpc error: code = NotFound desc = could not find container \"c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616\": container with ID starting with c37836b672b880c70a495077a897f858318bf3bcd2f66c1f77ce14e055098616 not found: ID does not exist" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.120372 5034 scope.go:117] "RemoveContainer" containerID="325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.126231 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-config" (OuterVolumeSpecName: "config") pod "5ab942c4-0db8-41b2-87c2-5bfedd95c49a" (UID: "5ab942c4-0db8-41b2-87c2-5bfedd95c49a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.136692 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.154500 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.164118 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" containerName="dnsmasq-dns" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164148 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" containerName="dnsmasq-dns" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.164160 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" containerName="init" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164168 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" containerName="init" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.164179 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerName="probe" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164187 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerName="probe" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.164197 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerName="neutron-api" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164203 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerName="neutron-api" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.164220 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerName="cinder-scheduler" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164227 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerName="cinder-scheduler" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.164242 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164248 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.164266 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerName="neutron-httpd" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164272 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerName="neutron-httpd" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.164287 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api-log" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164293 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api-log" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164493 5034 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerName="probe" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164513 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerName="neutron-api" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164526 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" containerName="neutron-httpd" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164546 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" containerName="cinder-scheduler" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164558 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api-log" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164570 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a9540d-b6f0-40d6-8138-91dbf7efa1f8" containerName="dnsmasq-dns" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.164582 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44d5480-b711-4d12-b8df-55cb25360488" containerName="barbican-api" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.168443 5034 scope.go:117] "RemoveContainer" containerID="0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.168757 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.171372 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.171553 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk2gp\" (UniqueName: \"kubernetes.io/projected/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-kube-api-access-pk2gp\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.171986 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.175872 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.182966 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.202723 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ab942c4-0db8-41b2-87c2-5bfedd95c49a" (UID: "5ab942c4-0db8-41b2-87c2-5bfedd95c49a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.226493 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5ab942c4-0db8-41b2-87c2-5bfedd95c49a" (UID: "5ab942c4-0db8-41b2-87c2-5bfedd95c49a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.274177 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.274245 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.274664 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-scripts\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.274850 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.274886 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.274912 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c85m\" (UniqueName: \"kubernetes.io/projected/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-kube-api-access-8c85m\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.275089 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.275108 5034 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab942c4-0db8-41b2-87c2-5bfedd95c49a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.308529 5034 scope.go:117] "RemoveContainer" containerID="325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f" Jan 05 22:13:08 
crc kubenswrapper[5034]: E0105 22:13:08.316723 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f\": container with ID starting with 325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f not found: ID does not exist" containerID="325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.316776 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f"} err="failed to get container status \"325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f\": rpc error: code = NotFound desc = could not find container \"325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f\": container with ID starting with 325cfc582f94579ac46b2070f35184f2098675dbf0d9f765e0fde86df377f43f not found: ID does not exist" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.316811 5034 scope.go:117] "RemoveContainer" containerID="0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba" Jan 05 22:13:08 crc kubenswrapper[5034]: E0105 22:13:08.320616 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba\": container with ID starting with 0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba not found: ID does not exist" containerID="0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.320738 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba"} err="failed to get container status \"0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba\": rpc error: code = NotFound desc = could not find container \"0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba\": container with ID starting with 0a98d3e17acbcdd790f4bf8593e77a8b256af60157e69d4999a23e8296b8c9ba not found: ID does not exist" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.377305 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.377448 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.377477 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-scripts\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.377500 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.377529 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.377557 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c85m\" (UniqueName: \"kubernetes.io/projected/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-kube-api-access-8c85m\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.382021 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.389902 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.390336 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-scripts\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.390685 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.397616 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.399789 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c85m\" (UniqueName: \"kubernetes.io/projected/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-kube-api-access-8c85m\") pod \"cinder-scheduler-0\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " pod="openstack/cinder-scheduler-0" Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.496159 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8d6946db8-g2jm7"] Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.507859 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8d6946db8-g2jm7"] Jan 05 22:13:08 crc kubenswrapper[5034]: I0105 22:13:08.612149 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 22:13:09 crc kubenswrapper[5034]: I0105 22:13:09.114400 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:13:09 crc kubenswrapper[5034]: I0105 22:13:09.854375 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab942c4-0db8-41b2-87c2-5bfedd95c49a" path="/var/lib/kubelet/pods/5ab942c4-0db8-41b2-87c2-5bfedd95c49a/volumes" Jan 05 22:13:09 crc kubenswrapper[5034]: I0105 22:13:09.855572 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7d50bf-c644-42a0-a922-49c7b8e4f7e5" path="/var/lib/kubelet/pods/ce7d50bf-c644-42a0-a922-49c7b8e4f7e5/volumes" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.163095 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3a3c79c1-b936-44a0-bca1-68f7d69d8fab","Type":"ContainerStarted","Data":"598559a378fd6ca644d7dbe7962a49bd4a282bb0608ed7a7db6dd7fff095ac06"} Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.163144 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3a3c79c1-b936-44a0-bca1-68f7d69d8fab","Type":"ContainerStarted","Data":"3e2f2090471daafe5cf6771374c241607afa8b75057bd103cc16270ea95e6abd"} Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.743391 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.825803 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.826920 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.831737 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.833275 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.833502 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-crb2x" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.852335 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.937413 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config-secret\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.937574 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.937622 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjgl8\" (UniqueName: 
\"kubernetes.io/projected/607d64db-b0e0-4933-bd7c-a3aaacb7586f-kube-api-access-mjgl8\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:10 crc kubenswrapper[5034]: I0105 22:13:10.937697 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.039579 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config-secret\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.039967 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.039994 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjgl8\" (UniqueName: \"kubernetes.io/projected/607d64db-b0e0-4933-bd7c-a3aaacb7586f-kube-api-access-mjgl8\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.040028 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.041678 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.046152 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.063806 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config-secret\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.071795 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjgl8\" (UniqueName: \"kubernetes.io/projected/607d64db-b0e0-4933-bd7c-a3aaacb7586f-kube-api-access-mjgl8\") pod \"openstackclient\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc 
Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.167363 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.179495 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3a3c79c1-b936-44a0-bca1-68f7d69d8fab","Type":"ContainerStarted","Data":"72960a55513e9ecef8e41d52208d6be80b39479282ae6e2f1bc413cfa48dcc2f"}
Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.244521 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.244501224 podStartE2EDuration="3.244501224s" podCreationTimestamp="2026-01-05 22:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:11.229696204 +0000 UTC m=+1283.601695643" watchObservedRunningTime="2026-01-05 22:13:11.244501224 +0000 UTC m=+1283.616500663"
Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.342823 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.360831 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.391657 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.393132 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.410468 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 05 22:13:11 crc kubenswrapper[5034]: E0105 22:13:11.545946 5034 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 05 22:13:11 crc kubenswrapper[5034]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_607d64db-b0e0-4933-bd7c-a3aaacb7586f_0(4a4d254306e41f3c80b5e8f7cc55d5f42c03eb5f1fa0ab766582f29499bc2540): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a4d254306e41f3c80b5e8f7cc55d5f42c03eb5f1fa0ab766582f29499bc2540" Netns:"/var/run/netns/608f6566-64bd-4b2c-b83b-375030b36cec" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4a4d254306e41f3c80b5e8f7cc55d5f42c03eb5f1fa0ab766582f29499bc2540;K8S_POD_UID=607d64db-b0e0-4933-bd7c-a3aaacb7586f" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/607d64db-b0e0-4933-bd7c-a3aaacb7586f]: expected pod UID "607d64db-b0e0-4933-bd7c-a3aaacb7586f" but got "a63e68c4-06b7-4513-ac92-6415cbd75e88" from Kube API
Jan 05 22:13:11 crc kubenswrapper[5034]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 05 22:13:11 crc kubenswrapper[5034]: >
Jan 05 22:13:11 crc kubenswrapper[5034]: E0105 22:13:11.546454 5034 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Jan 05 22:13:11 crc kubenswrapper[5034]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_607d64db-b0e0-4933-bd7c-a3aaacb7586f_0(4a4d254306e41f3c80b5e8f7cc55d5f42c03eb5f1fa0ab766582f29499bc2540): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a4d254306e41f3c80b5e8f7cc55d5f42c03eb5f1fa0ab766582f29499bc2540" Netns:"/var/run/netns/608f6566-64bd-4b2c-b83b-375030b36cec" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4a4d254306e41f3c80b5e8f7cc55d5f42c03eb5f1fa0ab766582f29499bc2540;K8S_POD_UID=607d64db-b0e0-4933-bd7c-a3aaacb7586f" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/607d64db-b0e0-4933-bd7c-a3aaacb7586f]: expected pod UID "607d64db-b0e0-4933-bd7c-a3aaacb7586f" but got "a63e68c4-06b7-4513-ac92-6415cbd75e88" from Kube API
Jan 05 22:13:11 crc kubenswrapper[5034]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 05 22:13:11 crc kubenswrapper[5034]: > pod="openstack/openstackclient"
" pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.659780 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config\") pod \"openstackclient\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.659807 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config-secret\") pod \"openstackclient\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.659893 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htn8n\" (UniqueName: \"kubernetes.io/projected/a63e68c4-06b7-4513-ac92-6415cbd75e88-kube-api-access-htn8n\") pod \"openstackclient\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.661185 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config\") pod \"openstackclient\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.665477 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config-secret\") pod \"openstackclient\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.667577 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.682604 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htn8n\" (UniqueName: \"kubernetes.io/projected/a63e68c4-06b7-4513-ac92-6415cbd75e88-kube-api-access-htn8n\") pod \"openstackclient\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " pod="openstack/openstackclient" Jan 05 22:13:11 crc kubenswrapper[5034]: I0105 22:13:11.715789 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.189619 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.202148 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.206839 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="607d64db-b0e0-4933-bd7c-a3aaacb7586f" podUID="a63e68c4-06b7-4513-ac92-6415cbd75e88" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.270060 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-combined-ca-bundle\") pod \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.270294 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config-secret\") pod \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.270339 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config\") pod \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.270378 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjgl8\" (UniqueName: \"kubernetes.io/projected/607d64db-b0e0-4933-bd7c-a3aaacb7586f-kube-api-access-mjgl8\") pod \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\" (UID: \"607d64db-b0e0-4933-bd7c-a3aaacb7586f\") " Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.270837 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "607d64db-b0e0-4933-bd7c-a3aaacb7586f" (UID: "607d64db-b0e0-4933-bd7c-a3aaacb7586f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.270944 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.276313 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607d64db-b0e0-4933-bd7c-a3aaacb7586f-kube-api-access-mjgl8" (OuterVolumeSpecName: "kube-api-access-mjgl8") pod "607d64db-b0e0-4933-bd7c-a3aaacb7586f" (UID: "607d64db-b0e0-4933-bd7c-a3aaacb7586f"). InnerVolumeSpecName "kube-api-access-mjgl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.277338 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "607d64db-b0e0-4933-bd7c-a3aaacb7586f" (UID: "607d64db-b0e0-4933-bd7c-a3aaacb7586f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.280282 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "607d64db-b0e0-4933-bd7c-a3aaacb7586f" (UID: "607d64db-b0e0-4933-bd7c-a3aaacb7586f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.290696 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.373069 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.373150 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjgl8\" (UniqueName: \"kubernetes.io/projected/607d64db-b0e0-4933-bd7c-a3aaacb7586f-kube-api-access-mjgl8\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:12 crc kubenswrapper[5034]: I0105 22:13:12.373162 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607d64db-b0e0-4933-bd7c-a3aaacb7586f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:13 crc kubenswrapper[5034]: I0105 22:13:13.201911 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 22:13:13 crc kubenswrapper[5034]: I0105 22:13:13.201952 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a63e68c4-06b7-4513-ac92-6415cbd75e88","Type":"ContainerStarted","Data":"b486f58d6e642e8c3032ea53ecb40a1aa4cc2a3c34d0c6cdbf85ded59085792b"} Jan 05 22:13:13 crc kubenswrapper[5034]: I0105 22:13:13.217387 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="607d64db-b0e0-4933-bd7c-a3aaacb7586f" podUID="a63e68c4-06b7-4513-ac92-6415cbd75e88" Jan 05 22:13:13 crc kubenswrapper[5034]: I0105 22:13:13.613407 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 22:13:13 crc kubenswrapper[5034]: I0105 22:13:13.852577 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607d64db-b0e0-4933-bd7c-a3aaacb7586f" path="/var/lib/kubelet/pods/607d64db-b0e0-4933-bd7c-a3aaacb7586f/volumes" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.371427 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6576bc4c77-zzdbj"] Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.373475 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.379348 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.379575 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.379704 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.408727 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6576bc4c77-zzdbj"] Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.435249 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-combined-ca-bundle\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.435364 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-log-httpd\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.435435 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-config-data\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.435633 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlsm\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-kube-api-access-8rlsm\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.435878 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-public-tls-certs\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.435950 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-run-httpd\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.435983 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-internal-tls-certs\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " 
pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.436012 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-etc-swift\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.538323 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-public-tls-certs\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.538451 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-run-httpd\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.538552 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-internal-tls-certs\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.538618 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-etc-swift\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.538666 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-combined-ca-bundle\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.538734 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-log-httpd\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.538858 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-config-data\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.538937 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rlsm\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-kube-api-access-8rlsm\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 
22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.541036 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-log-httpd\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.541306 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-run-httpd\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.548913 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-internal-tls-certs\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.549186 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-combined-ca-bundle\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.550209 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-config-data\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.553091 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-etc-swift\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.555338 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-public-tls-certs\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.562259 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rlsm\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-kube-api-access-8rlsm\") pod \"swift-proxy-6576bc4c77-zzdbj\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.698791 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.996308 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.997073 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="ceilometer-central-agent" containerID="cri-o://e94ed2a822f01b1a0a13f5468cb23be6dd62055b0c88dc8bdd7c5cdd47496b81" gracePeriod=30 Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.998044 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="proxy-httpd" containerID="cri-o://e734ff8efbbf9f9dba64af395e7f26949b4ca0e43394f22225b26d2b2299af0d" gracePeriod=30 Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.998129 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="sg-core" containerID="cri-o://490e6ea138a45f3784fc49d709dcc86c11116775e90db5fe78d3999ad7a1233f" gracePeriod=30 Jan 05 22:13:15 crc kubenswrapper[5034]: I0105 22:13:15.998175 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="ceilometer-notification-agent" containerID="cri-o://4c17fc6de0e641c864d73011a4352bc83d38c128d2fddc3d4bc1013886a878b1" gracePeriod=30 Jan 05 22:13:16 crc kubenswrapper[5034]: I0105 22:13:16.103990 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.165:3000/\": read tcp 10.217.0.2:48788->10.217.0.165:3000: read: connection reset by peer" Jan 05 22:13:16 crc kubenswrapper[5034]: I0105 22:13:16.240762 5034 generic.go:334] "Generic (PLEG): container finished" podID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerID="e734ff8efbbf9f9dba64af395e7f26949b4ca0e43394f22225b26d2b2299af0d" exitCode=0 Jan 05 22:13:16 crc kubenswrapper[5034]: I0105 22:13:16.241283 5034 generic.go:334] "Generic (PLEG): container finished" podID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerID="490e6ea138a45f3784fc49d709dcc86c11116775e90db5fe78d3999ad7a1233f" exitCode=2 Jan 05 22:13:16 crc kubenswrapper[5034]: I0105 22:13:16.240948 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerDied","Data":"e734ff8efbbf9f9dba64af395e7f26949b4ca0e43394f22225b26d2b2299af0d"} Jan 05 22:13:16 crc kubenswrapper[5034]: I0105 22:13:16.241343 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerDied","Data":"490e6ea138a45f3784fc49d709dcc86c11116775e90db5fe78d3999ad7a1233f"} Jan 05 22:13:16 crc kubenswrapper[5034]: I0105 22:13:16.366865 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6576bc4c77-zzdbj"] Jan 05 22:13:17 crc kubenswrapper[5034]: I0105 22:13:17.284392 5034 generic.go:334] "Generic (PLEG): container finished" podID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerID="e94ed2a822f01b1a0a13f5468cb23be6dd62055b0c88dc8bdd7c5cdd47496b81" exitCode=0 Jan 05 22:13:17 crc kubenswrapper[5034]: I0105 
22:13:17.285132 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerDied","Data":"e94ed2a822f01b1a0a13f5468cb23be6dd62055b0c88dc8bdd7c5cdd47496b81"} Jan 05 22:13:17 crc kubenswrapper[5034]: I0105 22:13:17.289709 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6576bc4c77-zzdbj" event={"ID":"983e4ee8-36de-4b90-b18b-eed4db804a3d","Type":"ContainerStarted","Data":"ac34648fd83cfc034ae26bdfef9f0975cae1c33e8e010cf1a87f4c7182f7691b"} Jan 05 22:13:17 crc kubenswrapper[5034]: I0105 22:13:17.289758 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6576bc4c77-zzdbj" event={"ID":"983e4ee8-36de-4b90-b18b-eed4db804a3d","Type":"ContainerStarted","Data":"f1b3c35f63294d582fd4b25a3dbec8507d92d98c35739809ed98988da8b876c1"} Jan 05 22:13:17 crc kubenswrapper[5034]: I0105 22:13:17.289772 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6576bc4c77-zzdbj" event={"ID":"983e4ee8-36de-4b90-b18b-eed4db804a3d","Type":"ContainerStarted","Data":"fe049e21007d27706741459ab513d6445c87271929bdfdc42770f1165dfd1877"} Jan 05 22:13:17 crc kubenswrapper[5034]: I0105 22:13:17.294401 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:17 crc kubenswrapper[5034]: I0105 22:13:17.295642 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:17 crc kubenswrapper[5034]: I0105 22:13:17.324908 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6576bc4c77-zzdbj" podStartSLOduration=2.324869725 podStartE2EDuration="2.324869725s" podCreationTimestamp="2026-01-05 22:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:17.316624041 +0000 UTC m=+1289.688623470" watchObservedRunningTime="2026-01-05 22:13:17.324869725 +0000 UTC m=+1289.696869164" Jan 05 22:13:18 crc kubenswrapper[5034]: I0105 22:13:18.884520 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 22:13:19 crc kubenswrapper[5034]: I0105 22:13:19.316715 5034 generic.go:334] "Generic (PLEG): container finished" podID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerID="4c17fc6de0e641c864d73011a4352bc83d38c128d2fddc3d4bc1013886a878b1" exitCode=0 Jan 05 22:13:19 crc kubenswrapper[5034]: I0105 22:13:19.316762 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerDied","Data":"4c17fc6de0e641c864d73011a4352bc83d38c128d2fddc3d4bc1013886a878b1"} Jan 05 22:13:20 crc kubenswrapper[5034]: I0105 22:13:20.468917 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:13:20 crc kubenswrapper[5034]: I0105 22:13:20.469285 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.312480 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.382626 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd157695-0fa5-4a5d-a462-75675460fdf3","Type":"ContainerDied","Data":"0bc36fc6bdff89faf0342f4a74c8c5417b068cad247de733b5c4295f7ae1733d"} Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.382676 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.382691 5034 scope.go:117] "RemoveContainer" containerID="e734ff8efbbf9f9dba64af395e7f26949b4ca0e43394f22225b26d2b2299af0d" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.384951 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a63e68c4-06b7-4513-ac92-6415cbd75e88","Type":"ContainerStarted","Data":"da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419"} Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.407429 5034 scope.go:117] "RemoveContainer" containerID="490e6ea138a45f3784fc49d709dcc86c11116775e90db5fe78d3999ad7a1233f" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.417044 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.7597180730000002 podStartE2EDuration="13.417023874s" podCreationTimestamp="2026-01-05 22:13:11 +0000 UTC" firstStartedPulling="2026-01-05 22:13:12.288892497 +0000 UTC m=+1284.660891936" lastFinishedPulling="2026-01-05 22:13:23.946198298 +0000 UTC m=+1296.318197737" observedRunningTime="2026-01-05 22:13:24.408302847 +0000 UTC m=+1296.780302296" watchObservedRunningTime="2026-01-05 22:13:24.417023874 +0000 UTC m=+1296.789023313" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.426965 5034 scope.go:117] "RemoveContainer" containerID="4c17fc6de0e641c864d73011a4352bc83d38c128d2fddc3d4bc1013886a878b1" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.449286 5034 scope.go:117] "RemoveContainer" containerID="e94ed2a822f01b1a0a13f5468cb23be6dd62055b0c88dc8bdd7c5cdd47496b81" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.450546 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-config-data\") pod \"dd157695-0fa5-4a5d-a462-75675460fdf3\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.450645 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-combined-ca-bundle\") pod \"dd157695-0fa5-4a5d-a462-75675460fdf3\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.450732 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgrzf\" (UniqueName: \"kubernetes.io/projected/dd157695-0fa5-4a5d-a462-75675460fdf3-kube-api-access-vgrzf\") pod \"dd157695-0fa5-4a5d-a462-75675460fdf3\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.450785 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-sg-core-conf-yaml\") pod \"dd157695-0fa5-4a5d-a462-75675460fdf3\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.450840 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-run-httpd\") pod \"dd157695-0fa5-4a5d-a462-75675460fdf3\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.450916 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-log-httpd\") pod \"dd157695-0fa5-4a5d-a462-75675460fdf3\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.450965 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-scripts\") pod \"dd157695-0fa5-4a5d-a462-75675460fdf3\" (UID: \"dd157695-0fa5-4a5d-a462-75675460fdf3\") " Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.451835 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd157695-0fa5-4a5d-a462-75675460fdf3" (UID: "dd157695-0fa5-4a5d-a462-75675460fdf3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.452821 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd157695-0fa5-4a5d-a462-75675460fdf3" (UID: "dd157695-0fa5-4a5d-a462-75675460fdf3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.456551 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-scripts" (OuterVolumeSpecName: "scripts") pod "dd157695-0fa5-4a5d-a462-75675460fdf3" (UID: "dd157695-0fa5-4a5d-a462-75675460fdf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.456974 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd157695-0fa5-4a5d-a462-75675460fdf3-kube-api-access-vgrzf" (OuterVolumeSpecName: "kube-api-access-vgrzf") pod "dd157695-0fa5-4a5d-a462-75675460fdf3" (UID: "dd157695-0fa5-4a5d-a462-75675460fdf3"). InnerVolumeSpecName "kube-api-access-vgrzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.485843 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd157695-0fa5-4a5d-a462-75675460fdf3" (UID: "dd157695-0fa5-4a5d-a462-75675460fdf3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.528223 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd157695-0fa5-4a5d-a462-75675460fdf3" (UID: "dd157695-0fa5-4a5d-a462-75675460fdf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.553361 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.553399 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.553407 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.553421 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgrzf\" (UniqueName: \"kubernetes.io/projected/dd157695-0fa5-4a5d-a462-75675460fdf3-kube-api-access-vgrzf\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.553429 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.553437 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd157695-0fa5-4a5d-a462-75675460fdf3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.570485 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-config-data" (OuterVolumeSpecName: "config-data") pod "dd157695-0fa5-4a5d-a462-75675460fdf3" (UID: "dd157695-0fa5-4a5d-a462-75675460fdf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.655620 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd157695-0fa5-4a5d-a462-75675460fdf3-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.720313 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.730123 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.761233 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:24 crc kubenswrapper[5034]: E0105 22:13:24.761717 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="sg-core" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.761743 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="sg-core" Jan 05 22:13:24 crc kubenswrapper[5034]: E0105 22:13:24.761784 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="ceilometer-notification-agent" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.761795 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="ceilometer-notification-agent" Jan 05 22:13:24 crc kubenswrapper[5034]: E0105 22:13:24.761814 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="proxy-httpd" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.761823 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="proxy-httpd" Jan 05 22:13:24 crc kubenswrapper[5034]: E0105 22:13:24.761837 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="ceilometer-central-agent" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.761846 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="ceilometer-central-agent" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.762139 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="ceilometer-central-agent" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.762156 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="sg-core" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.762173 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="ceilometer-notification-agent" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.762185 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" containerName="proxy-httpd" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.764285 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.766443 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.767268 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.824585 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.860297 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.860356 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-scripts\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.860481 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-run-httpd\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.860816 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-log-httpd\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.860847 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-config-data\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.861192 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.861236 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmncp\" (UniqueName: \"kubernetes.io/projected/f80ad6fa-6594-4137-9771-2a82558004d8-kube-api-access-vmncp\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.963630 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 
22:13:24.963703 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmncp\" (UniqueName: \"kubernetes.io/projected/f80ad6fa-6594-4137-9771-2a82558004d8-kube-api-access-vmncp\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.963762 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.963789 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-scripts\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.963822 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-run-httpd\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.963981 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-log-httpd\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.964004 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-config-data\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.964954 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-run-httpd\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.966275 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-log-httpd\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.972150 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.983243 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.983549 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-scripts\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:24 crc kubenswrapper[5034]: I0105 22:13:24.992567 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-config-data\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.008433 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmncp\" (UniqueName: \"kubernetes.io/projected/f80ad6fa-6594-4137-9771-2a82558004d8-kube-api-access-vmncp\") pod \"ceilometer-0\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " pod="openstack/ceilometer-0" Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.113260 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.618805 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:25 crc kubenswrapper[5034]: W0105 22:13:25.623228 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf80ad6fa_6594_4137_9771_2a82558004d8.slice/crio-4bd99b7e6d367ba39fd3927834c91911432f1edb3bc7037671bac3372341e5a3 WatchSource:0}: Error finding container 4bd99b7e6d367ba39fd3927834c91911432f1edb3bc7037671bac3372341e5a3: Status 404 returned error can't find the container with id 4bd99b7e6d367ba39fd3927834c91911432f1edb3bc7037671bac3372341e5a3 Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.709688 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.711630 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.725515 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.725806 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerName="glance-log" containerID="cri-o://08b481be8f4d588f1f224933b3fa92e6285f6fec55fd3d6fdd899a1c595c15c7" gracePeriod=30 Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.725990 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerName="glance-httpd" containerID="cri-o://cbb1499179ed3cc68b798ca3c488b445cca4d8762d63d3c225b79659d6b899c1" gracePeriod=30 Jan 05 22:13:25 crc kubenswrapper[5034]: I0105 22:13:25.865903 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd157695-0fa5-4a5d-a462-75675460fdf3" path="/var/lib/kubelet/pods/dd157695-0fa5-4a5d-a462-75675460fdf3/volumes" Jan 05 22:13:26 crc kubenswrapper[5034]: I0105 22:13:26.418860 5034 generic.go:334] "Generic (PLEG): container finished" podID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerID="08b481be8f4d588f1f224933b3fa92e6285f6fec55fd3d6fdd899a1c595c15c7" exitCode=143 
Jan 05 22:13:26 crc kubenswrapper[5034]: I0105 22:13:26.418936 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ca0033b-cb51-4e99-83f5-da165dbaf071","Type":"ContainerDied","Data":"08b481be8f4d588f1f224933b3fa92e6285f6fec55fd3d6fdd899a1c595c15c7"}
Jan 05 22:13:26 crc kubenswrapper[5034]: I0105 22:13:26.421811 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerStarted","Data":"b846b51363b5a5ea2e9dd2ff21f86882982bb42de16766fe3a86fe46a88bebb8"}
Jan 05 22:13:26 crc kubenswrapper[5034]: I0105 22:13:26.421870 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerStarted","Data":"4bd99b7e6d367ba39fd3927834c91911432f1edb3bc7037671bac3372341e5a3"}
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.005694 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.006091 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerName="glance-log" containerID="cri-o://a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0" gracePeriod=30
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.006250 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerName="glance-httpd" containerID="cri-o://a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447" gracePeriod=30
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.299314 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-f9vld"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.300901 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-f9vld"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.351159 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-f9vld"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.405751 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6b89b"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.407616 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6b89b"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.423699 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6b89b"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.431974 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s5ff\" (UniqueName: \"kubernetes.io/projected/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-kube-api-access-9s5ff\") pod \"nova-api-db-create-f9vld\" (UID: \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\") " pod="openstack/nova-api-db-create-f9vld"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.432058 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-operator-scripts\") pod \"nova-api-db-create-f9vld\" (UID: \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\") " pod="openstack/nova-api-db-create-f9vld"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.447588 5034 generic.go:334] "Generic (PLEG): container finished" podID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerID="a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0" exitCode=143
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.447683 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"760b010c-9b6e-4ed6-8ae0-9af72816c192","Type":"ContainerDied","Data":"a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0"}
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.450570 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerStarted","Data":"128d715ccc89a51fb06f8b3512013f8ea7ce34fb44fb6f123464783c33827010"}
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.487192 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a004-account-create-update-csgjm"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.488861 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a004-account-create-update-csgjm"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.496042 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.502243 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a004-account-create-update-csgjm"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.534440 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6235a7f3-12fc-455a-a4ba-a09957646334-operator-scripts\") pod \"nova-cell0-db-create-6b89b\" (UID: \"6235a7f3-12fc-455a-a4ba-a09957646334\") " pod="openstack/nova-cell0-db-create-6b89b"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.534537 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s5ff\" (UniqueName: \"kubernetes.io/projected/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-kube-api-access-9s5ff\") pod \"nova-api-db-create-f9vld\" (UID: \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\") " pod="openstack/nova-api-db-create-f9vld"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.534582 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-operator-scripts\") pod \"nova-api-db-create-f9vld\" (UID: \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\") " pod="openstack/nova-api-db-create-f9vld"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.534669 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rtn\" (UniqueName: \"kubernetes.io/projected/6235a7f3-12fc-455a-a4ba-a09957646334-kube-api-access-t6rtn\") pod \"nova-cell0-db-create-6b89b\" (UID: \"6235a7f3-12fc-455a-a4ba-a09957646334\") " pod="openstack/nova-cell0-db-create-6b89b"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.535798 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-operator-scripts\") pod \"nova-api-db-create-f9vld\" (UID: \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\") " pod="openstack/nova-api-db-create-f9vld"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.557755 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s5ff\" (UniqueName: \"kubernetes.io/projected/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-kube-api-access-9s5ff\") pod \"nova-api-db-create-f9vld\" (UID: \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\") " pod="openstack/nova-api-db-create-f9vld"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.613500 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jw9kr"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.617545 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jw9kr"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.625526 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jw9kr"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.637541 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6rtn\" (UniqueName: \"kubernetes.io/projected/6235a7f3-12fc-455a-a4ba-a09957646334-kube-api-access-t6rtn\") pod \"nova-cell0-db-create-6b89b\" (UID: \"6235a7f3-12fc-455a-a4ba-a09957646334\") " pod="openstack/nova-cell0-db-create-6b89b"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.637634 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppwrm\" (UniqueName: \"kubernetes.io/projected/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-kube-api-access-ppwrm\") pod \"nova-api-a004-account-create-update-csgjm\" (UID: \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\") " pod="openstack/nova-api-a004-account-create-update-csgjm"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.637681 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6235a7f3-12fc-455a-a4ba-a09957646334-operator-scripts\") pod \"nova-cell0-db-create-6b89b\" (UID: \"6235a7f3-12fc-455a-a4ba-a09957646334\") " pod="openstack/nova-cell0-db-create-6b89b"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.637791 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-operator-scripts\") pod \"nova-api-a004-account-create-update-csgjm\" (UID: \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\") " pod="openstack/nova-api-a004-account-create-update-csgjm"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.638813 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6235a7f3-12fc-455a-a4ba-a09957646334-operator-scripts\") pod \"nova-cell0-db-create-6b89b\" (UID: \"6235a7f3-12fc-455a-a4ba-a09957646334\") " pod="openstack/nova-cell0-db-create-6b89b"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.670022 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6rtn\" (UniqueName: \"kubernetes.io/projected/6235a7f3-12fc-455a-a4ba-a09957646334-kube-api-access-t6rtn\") pod \"nova-cell0-db-create-6b89b\" (UID: \"6235a7f3-12fc-455a-a4ba-a09957646334\") " pod="openstack/nova-cell0-db-create-6b89b"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.672666 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-f9vld"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.705141 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-2cvvx"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.706754 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.711202 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.725251 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-2cvvx"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.737913 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6b89b"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.743381 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wjbx\" (UniqueName: \"kubernetes.io/projected/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-kube-api-access-4wjbx\") pod \"nova-cell1-db-create-jw9kr\" (UID: \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\") " pod="openstack/nova-cell1-db-create-jw9kr"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.743882 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppwrm\" (UniqueName: \"kubernetes.io/projected/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-kube-api-access-ppwrm\") pod \"nova-api-a004-account-create-update-csgjm\" (UID: \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\") " pod="openstack/nova-api-a004-account-create-update-csgjm"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.743961 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-operator-scripts\") pod \"nova-cell1-db-create-jw9kr\" (UID: \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\") " pod="openstack/nova-cell1-db-create-jw9kr"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.744053 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-operator-scripts\") pod \"nova-api-a004-account-create-update-csgjm\" (UID: \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\") " pod="openstack/nova-api-a004-account-create-update-csgjm"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.744813 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-operator-scripts\") pod \"nova-api-a004-account-create-update-csgjm\" (UID: \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\") " pod="openstack/nova-api-a004-account-create-update-csgjm"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.785805 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppwrm\" (UniqueName: \"kubernetes.io/projected/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-kube-api-access-ppwrm\") pod \"nova-api-a004-account-create-update-csgjm\" (UID: \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\") " pod="openstack/nova-api-a004-account-create-update-csgjm"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.802176 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e1e8-account-create-update-z55c9"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.803380 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.808999 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.837156 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e1e8-account-create-update-z55c9"]
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.855157 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wjbx\" (UniqueName: \"kubernetes.io/projected/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-kube-api-access-4wjbx\") pod \"nova-cell1-db-create-jw9kr\" (UID: \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\") " pod="openstack/nova-cell1-db-create-jw9kr"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.855252 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-operator-scripts\") pod \"nova-cell1-db-create-jw9kr\" (UID: \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\") " pod="openstack/nova-cell1-db-create-jw9kr"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.855280 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-operator-scripts\") pod \"nova-cell0-dbbc-account-create-update-2cvvx\" (UID: \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\") " pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.855353 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqwm\" (UniqueName: \"kubernetes.io/projected/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-kube-api-access-njqwm\") pod \"nova-cell0-dbbc-account-create-update-2cvvx\" (UID: \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\") " pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.856340 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-operator-scripts\") pod \"nova-cell1-db-create-jw9kr\" (UID: \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\") " pod="openstack/nova-cell1-db-create-jw9kr"
Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.893156 5034 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-a004-account-create-update-csgjm" Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.897002 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wjbx\" (UniqueName: \"kubernetes.io/projected/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-kube-api-access-4wjbx\") pod \"nova-cell1-db-create-jw9kr\" (UID: \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\") " pod="openstack/nova-cell1-db-create-jw9kr" Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.952810 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.956684 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5lb\" (UniqueName: \"kubernetes.io/projected/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-kube-api-access-qm5lb\") pod \"nova-cell1-e1e8-account-create-update-z55c9\" (UID: \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\") " pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.956868 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqwm\" (UniqueName: \"kubernetes.io/projected/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-kube-api-access-njqwm\") pod \"nova-cell0-dbbc-account-create-update-2cvvx\" (UID: \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\") " pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.957019 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-operator-scripts\") pod \"nova-cell1-e1e8-account-create-update-z55c9\" (UID: \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\") " pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.957167 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-operator-scripts\") pod \"nova-cell0-dbbc-account-create-update-2cvvx\" (UID: \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\") " pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.957896 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-operator-scripts\") pod \"nova-cell0-dbbc-account-create-update-2cvvx\" (UID: \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\") " pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" Jan 05 22:13:27 crc kubenswrapper[5034]: I0105 22:13:27.981719 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqwm\" (UniqueName: \"kubernetes.io/projected/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-kube-api-access-njqwm\") pod \"nova-cell0-dbbc-account-create-update-2cvvx\" (UID: \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\") " pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.006794 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jw9kr" Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.027005 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.058732 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-operator-scripts\") pod \"nova-cell1-e1e8-account-create-update-z55c9\" (UID: \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\") " pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.059018 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5lb\" (UniqueName: \"kubernetes.io/projected/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-kube-api-access-qm5lb\") pod \"nova-cell1-e1e8-account-create-update-z55c9\" (UID: \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\") " pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.060230 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-operator-scripts\") pod \"nova-cell1-e1e8-account-create-update-z55c9\" (UID: \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\") " pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.078549 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5lb\" (UniqueName: \"kubernetes.io/projected/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-kube-api-access-qm5lb\") pod \"nova-cell1-e1e8-account-create-update-z55c9\" (UID: \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\") " pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.309774 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-f9vld"] Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.337587 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:28 crc kubenswrapper[5034]: W0105 22:13:28.346441 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4764f8ba_949a_4792_9cd1_2aae9c0a7d92.slice/crio-cf496edeb1f5101b7d4c9ba140d70e1798476d205685add04a03fed3821c624b WatchSource:0}: Error finding container cf496edeb1f5101b7d4c9ba140d70e1798476d205685add04a03fed3821c624b: Status 404 returned error can't find the container with id cf496edeb1f5101b7d4c9ba140d70e1798476d205685add04a03fed3821c624b Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.379498 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6b89b"] Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.463965 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a004-account-create-update-csgjm"] Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.480782 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f9vld" event={"ID":"4764f8ba-949a-4792-9cd1-2aae9c0a7d92","Type":"ContainerStarted","Data":"cf496edeb1f5101b7d4c9ba140d70e1798476d205685add04a03fed3821c624b"} Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.487497 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6b89b" event={"ID":"6235a7f3-12fc-455a-a4ba-a09957646334","Type":"ContainerStarted","Data":"28f86d5dc8838568a7f9978a0faac5230f39a36aa9b0996d12ee3ea391aba037"} Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.505205 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerStarted","Data":"50c718b5e1e2b526f111e8c5d327c1056a7b720ce590ad8212f96567dd3d5f54"} Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.775469 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jw9kr"] Jan 05 22:13:28 crc kubenswrapper[5034]: I0105 22:13:28.921394 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-2cvvx"] Jan 05 22:13:28 crc kubenswrapper[5034]: W0105 22:13:28.924852 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8e640fc_0cfe_430b_9b7a_90c5d68e6b76.slice/crio-840a7b4512f864223534e9af76806730ff59f697789d4e31ce48ce520fd3b593 WatchSource:0}: Error finding container 840a7b4512f864223534e9af76806730ff59f697789d4e31ce48ce520fd3b593: Status 404 returned error can't find the container with id 840a7b4512f864223534e9af76806730ff59f697789d4e31ce48ce520fd3b593 Jan 05 22:13:29 crc kubenswrapper[5034]: W0105 22:13:29.023634 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca996351_9e8b_45d0_91d2_7afc4c65f9cb.slice/crio-893c3f60ccb39b816b898fbcb1a651520f7c99cb29646d808cfe7a2d9be86446 WatchSource:0}: Error finding container 893c3f60ccb39b816b898fbcb1a651520f7c99cb29646d808cfe7a2d9be86446: Status 404 returned error can't find the container with id 893c3f60ccb39b816b898fbcb1a651520f7c99cb29646d808cfe7a2d9be86446 Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.024657 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e1e8-account-create-update-z55c9"] Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.627342 
5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jw9kr" event={"ID":"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2","Type":"ContainerStarted","Data":"4561867d9728d69a02d54c63e163f903bf49385633e98dca91da8fe50f6af5ae"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.627891 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jw9kr" event={"ID":"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2","Type":"ContainerStarted","Data":"15021707831d4ae32916ae39f5896278e148c9fc3d2aa217e733e9d8f0e3d9d5"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.665440 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-jw9kr" podStartSLOduration=2.665414048 podStartE2EDuration="2.665414048s" podCreationTimestamp="2026-01-05 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:29.66371247 +0000 UTC m=+1302.035711909" watchObservedRunningTime="2026-01-05 22:13:29.665414048 +0000 UTC m=+1302.037413487" Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.713384 5034 generic.go:334] "Generic (PLEG): container finished" podID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerID="cbb1499179ed3cc68b798ca3c488b445cca4d8762d63d3c225b79659d6b899c1" exitCode=0 Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.713512 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ca0033b-cb51-4e99-83f5-da165dbaf071","Type":"ContainerDied","Data":"cbb1499179ed3cc68b798ca3c488b445cca4d8762d63d3c225b79659d6b899c1"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.738713 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" event={"ID":"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76","Type":"ContainerStarted","Data":"5b6124b6c892a751bebde222de476c2c34313e4e4765b0115b760fb0f0795d43"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.738792 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" event={"ID":"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76","Type":"ContainerStarted","Data":"840a7b4512f864223534e9af76806730ff59f697789d4e31ce48ce520fd3b593"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.760212 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" event={"ID":"ca996351-9e8b-45d0-91d2-7afc4c65f9cb","Type":"ContainerStarted","Data":"a3a6a247d8e9e0c75348093c58cfb010339ed18d23fc59c36db9fe64e0087352"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.760259 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" event={"ID":"ca996351-9e8b-45d0-91d2-7afc4c65f9cb","Type":"ContainerStarted","Data":"893c3f60ccb39b816b898fbcb1a651520f7c99cb29646d808cfe7a2d9be86446"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.765130 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f9vld" event={"ID":"4764f8ba-949a-4792-9cd1-2aae9c0a7d92","Type":"ContainerStarted","Data":"bcb55346a2dd6fa4401dc3b99af08e3b463b472ebc05f09626fc4070fce1d44f"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.789295 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6b89b" 
event={"ID":"6235a7f3-12fc-455a-a4ba-a09957646334","Type":"ContainerStarted","Data":"3b497418fb4fddbdc3e540084f3bebc4162bfd69a04333747c2d1bc3236fe875"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.829410 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" podStartSLOduration=2.8293812320000002 podStartE2EDuration="2.829381232s" podCreationTimestamp="2026-01-05 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:29.77634303 +0000 UTC m=+1302.148342469" watchObservedRunningTime="2026-01-05 22:13:29.829381232 +0000 UTC m=+1302.201380661" Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.844695 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" podStartSLOduration=2.844673916 podStartE2EDuration="2.844673916s" podCreationTimestamp="2026-01-05 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:29.804373464 +0000 UTC m=+1302.176372913" watchObservedRunningTime="2026-01-05 22:13:29.844673916 +0000 UTC m=+1302.216673355" Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.888287 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-f9vld" podStartSLOduration=2.88826516 podStartE2EDuration="2.88826516s" podCreationTimestamp="2026-01-05 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:29.827130589 +0000 UTC m=+1302.199130028" watchObservedRunningTime="2026-01-05 22:13:29.88826516 +0000 UTC m=+1302.260264599" Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.895902 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a004-account-create-update-csgjm" event={"ID":"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3","Type":"ContainerStarted","Data":"0b0da96279521b01b3d67ec6905df324227804077a59066455d25dd323c8ca4c"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.895954 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a004-account-create-update-csgjm" event={"ID":"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3","Type":"ContainerStarted","Data":"a4090f30df0a8b9c45866d0f8bbe436fec4a4d5f1b8132c9b634c3f5183a5a92"} Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.899861 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-6b89b" podStartSLOduration=2.899840098 podStartE2EDuration="2.899840098s" podCreationTimestamp="2026-01-05 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:29.856624674 +0000 UTC m=+1302.228624113" watchObservedRunningTime="2026-01-05 22:13:29.899840098 +0000 UTC m=+1302.271839537" Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.907851 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a004-account-create-update-csgjm" podStartSLOduration=2.907827054 podStartE2EDuration="2.907827054s" podCreationTimestamp="2026-01-05 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-05 22:13:29.871039762 +0000 UTC m=+1302.243039201" watchObservedRunningTime="2026-01-05 22:13:29.907827054 +0000 UTC m=+1302.279826493" Jan 05 22:13:29 crc kubenswrapper[5034]: I0105 22:13:29.931317 5034 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf44d5480-b711-4d12-b8df-55cb25360488"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf44d5480-b711-4d12-b8df-55cb25360488] : Timed out while waiting for systemd to remove kubepods-besteffort-podf44d5480_b711_4d12_b8df_55cb25360488.slice" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.425220 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.538597 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2ca0033b-cb51-4e99-83f5-da165dbaf071\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.538729 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-combined-ca-bundle\") pod \"2ca0033b-cb51-4e99-83f5-da165dbaf071\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.538775 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89qgt\" (UniqueName: \"kubernetes.io/projected/2ca0033b-cb51-4e99-83f5-da165dbaf071-kube-api-access-89qgt\") pod \"2ca0033b-cb51-4e99-83f5-da165dbaf071\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.538843 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-scripts\") pod \"2ca0033b-cb51-4e99-83f5-da165dbaf071\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.540246 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-httpd-run\") pod \"2ca0033b-cb51-4e99-83f5-da165dbaf071\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.540303 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-logs\") pod \"2ca0033b-cb51-4e99-83f5-da165dbaf071\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.540343 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-internal-tls-certs\") pod \"2ca0033b-cb51-4e99-83f5-da165dbaf071\" (UID: \"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.540370 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-config-data\") pod \"2ca0033b-cb51-4e99-83f5-da165dbaf071\" (UID: 
\"2ca0033b-cb51-4e99-83f5-da165dbaf071\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.540882 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-logs" (OuterVolumeSpecName: "logs") pod "2ca0033b-cb51-4e99-83f5-da165dbaf071" (UID: "2ca0033b-cb51-4e99-83f5-da165dbaf071"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.541693 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.542021 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2ca0033b-cb51-4e99-83f5-da165dbaf071" (UID: "2ca0033b-cb51-4e99-83f5-da165dbaf071"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.566529 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-scripts" (OuterVolumeSpecName: "scripts") pod "2ca0033b-cb51-4e99-83f5-da165dbaf071" (UID: "2ca0033b-cb51-4e99-83f5-da165dbaf071"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.567602 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "2ca0033b-cb51-4e99-83f5-da165dbaf071" (UID: "2ca0033b-cb51-4e99-83f5-da165dbaf071"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.587072 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ca0033b-cb51-4e99-83f5-da165dbaf071" (UID: "2ca0033b-cb51-4e99-83f5-da165dbaf071"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.593955 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2ca0033b-cb51-4e99-83f5-da165dbaf071" (UID: "2ca0033b-cb51-4e99-83f5-da165dbaf071"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.606357 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca0033b-cb51-4e99-83f5-da165dbaf071-kube-api-access-89qgt" (OuterVolumeSpecName: "kube-api-access-89qgt") pod "2ca0033b-cb51-4e99-83f5-da165dbaf071" (UID: "2ca0033b-cb51-4e99-83f5-da165dbaf071"). InnerVolumeSpecName "kube-api-access-89qgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.645277 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.645325 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.645342 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89qgt\" (UniqueName: \"kubernetes.io/projected/2ca0033b-cb51-4e99-83f5-da165dbaf071-kube-api-access-89qgt\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.645357 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.645368 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ca0033b-cb51-4e99-83f5-da165dbaf071-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.645379 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.671272 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.671296 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-config-data" (OuterVolumeSpecName: "config-data") pod "2ca0033b-cb51-4e99-83f5-da165dbaf071" (UID: "2ca0033b-cb51-4e99-83f5-da165dbaf071"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.720096 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.747044 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca0033b-cb51-4e99-83f5-da165dbaf071-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.747100 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.848247 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-config-data\") pod \"760b010c-9b6e-4ed6-8ae0-9af72816c192\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.848407 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-logs\") pod \"760b010c-9b6e-4ed6-8ae0-9af72816c192\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.848473 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-combined-ca-bundle\") pod \"760b010c-9b6e-4ed6-8ae0-9af72816c192\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.848501 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-scripts\") pod \"760b010c-9b6e-4ed6-8ae0-9af72816c192\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.848547 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-httpd-run\") pod \"760b010c-9b6e-4ed6-8ae0-9af72816c192\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.848571 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"760b010c-9b6e-4ed6-8ae0-9af72816c192\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.848603 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hfn2\" (UniqueName: \"kubernetes.io/projected/760b010c-9b6e-4ed6-8ae0-9af72816c192-kube-api-access-5hfn2\") pod \"760b010c-9b6e-4ed6-8ae0-9af72816c192\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.848720 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-public-tls-certs\") pod \"760b010c-9b6e-4ed6-8ae0-9af72816c192\" (UID: \"760b010c-9b6e-4ed6-8ae0-9af72816c192\") " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.851089 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "760b010c-9b6e-4ed6-8ae0-9af72816c192" (UID: "760b010c-9b6e-4ed6-8ae0-9af72816c192"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.851195 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-logs" (OuterVolumeSpecName: "logs") pod "760b010c-9b6e-4ed6-8ae0-9af72816c192" (UID: "760b010c-9b6e-4ed6-8ae0-9af72816c192"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.855482 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760b010c-9b6e-4ed6-8ae0-9af72816c192-kube-api-access-5hfn2" (OuterVolumeSpecName: "kube-api-access-5hfn2") pod "760b010c-9b6e-4ed6-8ae0-9af72816c192" (UID: "760b010c-9b6e-4ed6-8ae0-9af72816c192"). InnerVolumeSpecName "kube-api-access-5hfn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.856314 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-scripts" (OuterVolumeSpecName: "scripts") pod "760b010c-9b6e-4ed6-8ae0-9af72816c192" (UID: "760b010c-9b6e-4ed6-8ae0-9af72816c192"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.866259 5034 generic.go:334] "Generic (PLEG): container finished" podID="6235a7f3-12fc-455a-a4ba-a09957646334" containerID="3b497418fb4fddbdc3e540084f3bebc4162bfd69a04333747c2d1bc3236fe875" exitCode=0 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.866358 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6b89b" event={"ID":"6235a7f3-12fc-455a-a4ba-a09957646334","Type":"ContainerDied","Data":"3b497418fb4fddbdc3e540084f3bebc4162bfd69a04333747c2d1bc3236fe875"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.869634 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "760b010c-9b6e-4ed6-8ae0-9af72816c192" (UID: "760b010c-9b6e-4ed6-8ae0-9af72816c192"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.887924 5034 generic.go:334] "Generic (PLEG): container finished" podID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerID="a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447" exitCode=0 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.888242 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.888282 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"760b010c-9b6e-4ed6-8ae0-9af72816c192","Type":"ContainerDied","Data":"a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.895344 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"760b010c-9b6e-4ed6-8ae0-9af72816c192","Type":"ContainerDied","Data":"18f10b0cf95084ff0f77ee2faf230fae11346422a9a664379ac4cc52f31d1209"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.895379 5034 scope.go:117] "RemoveContainer" containerID="a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.910918 5034 generic.go:334] "Generic (PLEG): container finished" podID="e8e640fc-0cfe-430b-9b7a-90c5d68e6b76" containerID="5b6124b6c892a751bebde222de476c2c34313e4e4765b0115b760fb0f0795d43" exitCode=0 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.911930 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" event={"ID":"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76","Type":"ContainerDied","Data":"5b6124b6c892a751bebde222de476c2c34313e4e4765b0115b760fb0f0795d43"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.936641 5034 generic.go:334] "Generic (PLEG): container finished" podID="4764f8ba-949a-4792-9cd1-2aae9c0a7d92" containerID="bcb55346a2dd6fa4401dc3b99af08e3b463b472ebc05f09626fc4070fce1d44f" exitCode=0 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.936855 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f9vld" event={"ID":"4764f8ba-949a-4792-9cd1-2aae9c0a7d92","Type":"ContainerDied","Data":"bcb55346a2dd6fa4401dc3b99af08e3b463b472ebc05f09626fc4070fce1d44f"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.944678 5034 generic.go:334] "Generic (PLEG): container finished" podID="f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3" containerID="0b0da96279521b01b3d67ec6905df324227804077a59066455d25dd323c8ca4c" exitCode=0 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.944777 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a004-account-create-update-csgjm" event={"ID":"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3","Type":"ContainerDied","Data":"0b0da96279521b01b3d67ec6905df324227804077a59066455d25dd323c8ca4c"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.956267 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerStarted","Data":"8ba76c9feefacb95617456c0e51dffc2fce6e5bb33fdac06f7b530c30750ad1e"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.956874 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.957066 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="ceilometer-central-agent" containerID="cri-o://b846b51363b5a5ea2e9dd2ff21f86882982bb42de16766fe3a86fe46a88bebb8" gracePeriod=30 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.957145 5034 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="sg-core" containerID="cri-o://50c718b5e1e2b526f111e8c5d327c1056a7b720ce590ad8212f96567dd3d5f54" gracePeriod=30 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.957263 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="proxy-httpd" containerID="cri-o://8ba76c9feefacb95617456c0e51dffc2fce6e5bb33fdac06f7b530c30750ad1e" gracePeriod=30 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.957343 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="ceilometer-notification-agent" containerID="cri-o://128d715ccc89a51fb06f8b3512013f8ea7ce34fb44fb6f123464783c33827010" gracePeriod=30 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.959366 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.959441 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.959589 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/760b010c-9b6e-4ed6-8ae0-9af72816c192-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.959683 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.959745 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hfn2\" (UniqueName: \"kubernetes.io/projected/760b010c-9b6e-4ed6-8ae0-9af72816c192-kube-api-access-5hfn2\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.963224 5034 generic.go:334] "Generic (PLEG): container finished" podID="d2d0aa4e-b8be-479d-8583-bb3cd2a245f2" containerID="4561867d9728d69a02d54c63e163f903bf49385633e98dca91da8fe50f6af5ae" exitCode=0 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.963401 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jw9kr" event={"ID":"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2","Type":"ContainerDied","Data":"4561867d9728d69a02d54c63e163f903bf49385633e98dca91da8fe50f6af5ae"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.965782 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ca0033b-cb51-4e99-83f5-da165dbaf071","Type":"ContainerDied","Data":"6f9467df80d98371cd926be649409a442971ac2694e67b0e8432a8f3dca3cb68"} Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.965982 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.971954 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-config-data" (OuterVolumeSpecName: "config-data") pod "760b010c-9b6e-4ed6-8ae0-9af72816c192" (UID: "760b010c-9b6e-4ed6-8ae0-9af72816c192"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.978395 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "760b010c-9b6e-4ed6-8ae0-9af72816c192" (UID: "760b010c-9b6e-4ed6-8ae0-9af72816c192"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.980040 5034 generic.go:334] "Generic (PLEG): container finished" podID="ca996351-9e8b-45d0-91d2-7afc4c65f9cb" containerID="a3a6a247d8e9e0c75348093c58cfb010339ed18d23fc59c36db9fe64e0087352" exitCode=0 Jan 05 22:13:30 crc kubenswrapper[5034]: I0105 22:13:30.980130 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" event={"ID":"ca996351-9e8b-45d0-91d2-7afc4c65f9cb","Type":"ContainerDied","Data":"a3a6a247d8e9e0c75348093c58cfb010339ed18d23fc59c36db9fe64e0087352"} Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.007206 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "760b010c-9b6e-4ed6-8ae0-9af72816c192" (UID: "760b010c-9b6e-4ed6-8ae0-9af72816c192"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.017298 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.207708771 podStartE2EDuration="7.01727282s" podCreationTimestamp="2026-01-05 22:13:24 +0000 UTC" firstStartedPulling="2026-01-05 22:13:25.625777992 +0000 UTC m=+1297.997777421" lastFinishedPulling="2026-01-05 22:13:29.435342031 +0000 UTC m=+1301.807341470" observedRunningTime="2026-01-05 22:13:31.003928293 +0000 UTC m=+1303.375927742" watchObservedRunningTime="2026-01-05 22:13:31.01727282 +0000 UTC m=+1303.389272259" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.029659 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.050869 5034 scope.go:117] "RemoveContainer" containerID="a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.061464 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.061848 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.061916 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.061972 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760b010c-9b6e-4ed6-8ae0-9af72816c192-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.076656 5034 scope.go:117] "RemoveContainer" containerID="a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447" Jan 05 22:13:31 crc kubenswrapper[5034]: E0105 22:13:31.081054 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447\": container with ID starting with a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447 not found: ID does not exist" containerID="a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.081179 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447"} err="failed to get container status \"a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447\": rpc error: code = NotFound desc = could not find container \"a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447\": container with ID starting with a7fd3598d825d79bf39372ce7c45d2ece6f2922a9ff1ec96d51c00c185435447 not found: ID does not exist" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.081228 5034 scope.go:117] "RemoveContainer" containerID="a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0" Jan 05 22:13:31 crc 
kubenswrapper[5034]: E0105 22:13:31.082302 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0\": container with ID starting with a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0 not found: ID does not exist" containerID="a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.082353 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0"} err="failed to get container status \"a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0\": rpc error: code = NotFound desc = could not find container \"a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0\": container with ID starting with a06d438781e8b04d124d791d3d2750cd5e4ec114e5848edc0dd1b411f33d64e0 not found: ID does not exist" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.082385 5034 scope.go:117] "RemoveContainer" containerID="cbb1499179ed3cc68b798ca3c488b445cca4d8762d63d3c225b79659d6b899c1" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.118678 5034 scope.go:117] "RemoveContainer" containerID="08b481be8f4d588f1f224933b3fa92e6285f6fec55fd3d6fdd899a1c595c15c7" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.122233 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.129607 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.137251 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:13:31 crc kubenswrapper[5034]: E0105 22:13:31.137753 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerName="glance-log" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.137783 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerName="glance-log" Jan 05 22:13:31 crc kubenswrapper[5034]: E0105 22:13:31.137800 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerName="glance-log" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.137807 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerName="glance-log" Jan 05 22:13:31 crc kubenswrapper[5034]: E0105 22:13:31.137844 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerName="glance-httpd" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.137853 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerName="glance-httpd" Jan 05 22:13:31 crc kubenswrapper[5034]: E0105 22:13:31.137867 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerName="glance-httpd" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.137875 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerName="glance-httpd" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.138102 5034 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerName="glance-log" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.138119 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerName="glance-httpd" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.138130 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" containerName="glance-log" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.138144 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" containerName="glance-httpd" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.139260 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.142990 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.143443 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.147214 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.263701 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.265992 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.266050 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.266126 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.266152 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.266166 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 
05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.266186 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvs6k\" (UniqueName: \"kubernetes.io/projected/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-kube-api-access-vvs6k\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.266229 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.266297 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.279144 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.293587 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.295815 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.298957 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.301569 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.303233 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.367969 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368037 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368066 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368104 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368125 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368450 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368477 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbdr\" (UniqueName: \"kubernetes.io/projected/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-kube-api-access-ppbdr\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368496 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368521 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368580 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368602 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368621 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368641 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vvs6k\" (UniqueName: \"kubernetes.io/projected/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-kube-api-access-vvs6k\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368671 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368697 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-logs\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.368714 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.369908 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.372729 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.376242 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.377159 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.379721 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.380500 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.384752 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.397854 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvs6k\" (UniqueName: \"kubernetes.io/projected/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-kube-api-access-vvs6k\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.400589 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.457986 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.471058 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-logs\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.471461 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.471536 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-logs\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.471638 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.471700 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.471742 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.471792 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.471910 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.472455 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbdr\" (UniqueName: \"kubernetes.io/projected/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-kube-api-access-ppbdr\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.472765 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.473156 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.476651 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.477634 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.477841 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.479642 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.491813 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbdr\" (UniqueName: \"kubernetes.io/projected/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-kube-api-access-ppbdr\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.503839 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.623880 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.856617 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca0033b-cb51-4e99-83f5-da165dbaf071" path="/var/lib/kubelet/pods/2ca0033b-cb51-4e99-83f5-da165dbaf071/volumes" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.861447 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760b010c-9b6e-4ed6-8ae0-9af72816c192" path="/var/lib/kubelet/pods/760b010c-9b6e-4ed6-8ae0-9af72816c192/volumes" Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.999115 5034 generic.go:334] "Generic (PLEG): container finished" podID="f80ad6fa-6594-4137-9771-2a82558004d8" containerID="8ba76c9feefacb95617456c0e51dffc2fce6e5bb33fdac06f7b530c30750ad1e" exitCode=0 Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.999169 5034 generic.go:334] "Generic (PLEG): container finished" podID="f80ad6fa-6594-4137-9771-2a82558004d8" containerID="50c718b5e1e2b526f111e8c5d327c1056a7b720ce590ad8212f96567dd3d5f54" exitCode=2 Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.999178 5034 generic.go:334] "Generic (PLEG): container finished" podID="f80ad6fa-6594-4137-9771-2a82558004d8" containerID="128d715ccc89a51fb06f8b3512013f8ea7ce34fb44fb6f123464783c33827010" exitCode=0 Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.999441 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerDied","Data":"8ba76c9feefacb95617456c0e51dffc2fce6e5bb33fdac06f7b530c30750ad1e"} Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.999491 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerDied","Data":"50c718b5e1e2b526f111e8c5d327c1056a7b720ce590ad8212f96567dd3d5f54"} Jan 05 22:13:31 crc kubenswrapper[5034]: I0105 22:13:31.999507 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerDied","Data":"128d715ccc89a51fb06f8b3512013f8ea7ce34fb44fb6f123464783c33827010"} Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.136698 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.491923 5034 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.609297 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-operator-scripts\") pod \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\" (UID: \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\") " Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.609512 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5lb\" (UniqueName: \"kubernetes.io/projected/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-kube-api-access-qm5lb\") pod \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\" (UID: \"ca996351-9e8b-45d0-91d2-7afc4c65f9cb\") " Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.610681 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca996351-9e8b-45d0-91d2-7afc4c65f9cb" (UID: "ca996351-9e8b-45d0-91d2-7afc4c65f9cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.631417 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-kube-api-access-qm5lb" (OuterVolumeSpecName: "kube-api-access-qm5lb") pod "ca996351-9e8b-45d0-91d2-7afc4c65f9cb" (UID: "ca996351-9e8b-45d0-91d2-7afc4c65f9cb"). InnerVolumeSpecName "kube-api-access-qm5lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.721522 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.721563 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm5lb\" (UniqueName: \"kubernetes.io/projected/ca996351-9e8b-45d0-91d2-7afc4c65f9cb-kube-api-access-qm5lb\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.792709 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.877991 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.930174 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-operator-scripts\") pod \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\" (UID: \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\") " Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.930243 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njqwm\" (UniqueName: \"kubernetes.io/projected/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-kube-api-access-njqwm\") pod \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\" (UID: \"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76\") " Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.933231 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8e640fc-0cfe-430b-9b7a-90c5d68e6b76" (UID: "e8e640fc-0cfe-430b-9b7a-90c5d68e6b76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:32 crc kubenswrapper[5034]: W0105 22:13:32.941509 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c2c8ddc_f82a_4cca_8a84_90c5713754cf.slice/crio-8f8e9a9d30451a023db9dc005c255f85630941006c94ae091f487b3412bebba6 WatchSource:0}: Error finding container 8f8e9a9d30451a023db9dc005c255f85630941006c94ae091f487b3412bebba6: Status 404 returned error can't find the container with id 8f8e9a9d30451a023db9dc005c255f85630941006c94ae091f487b3412bebba6 Jan 05 22:13:32 crc kubenswrapper[5034]: I0105 22:13:32.950370 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-kube-api-access-njqwm" (OuterVolumeSpecName: "kube-api-access-njqwm") pod "e8e640fc-0cfe-430b-9b7a-90c5d68e6b76" (UID: "e8e640fc-0cfe-430b-9b7a-90c5d68e6b76"). InnerVolumeSpecName "kube-api-access-njqwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.037521 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.037562 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njqwm\" (UniqueName: \"kubernetes.io/projected/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76-kube-api-access-njqwm\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.039275 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jw9kr" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.107144 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc1f97e4-be98-4c2a-b819-17d9c3b0be51","Type":"ContainerStarted","Data":"259ed9625b201feda963d9cd1abfab443233d945e13e03b12cc13d939ccc66d3"} Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.113433 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" event={"ID":"ca996351-9e8b-45d0-91d2-7afc4c65f9cb","Type":"ContainerDied","Data":"893c3f60ccb39b816b898fbcb1a651520f7c99cb29646d808cfe7a2d9be86446"} Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.113468 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e1e8-account-create-update-z55c9" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.113498 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="893c3f60ccb39b816b898fbcb1a651520f7c99cb29646d808cfe7a2d9be86446" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.119572 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c2c8ddc-f82a-4cca-8a84-90c5713754cf","Type":"ContainerStarted","Data":"8f8e9a9d30451a023db9dc005c255f85630941006c94ae091f487b3412bebba6"} Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.123920 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a004-account-create-update-csgjm" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.126437 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jw9kr" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.126925 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jw9kr" event={"ID":"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2","Type":"ContainerDied","Data":"15021707831d4ae32916ae39f5896278e148c9fc3d2aa217e733e9d8f0e3d9d5"} Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.129122 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15021707831d4ae32916ae39f5896278e148c9fc3d2aa217e733e9d8f0e3d9d5" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.134364 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" event={"ID":"e8e640fc-0cfe-430b-9b7a-90c5d68e6b76","Type":"ContainerDied","Data":"840a7b4512f864223534e9af76806730ff59f697789d4e31ce48ce520fd3b593"} Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.134398 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840a7b4512f864223534e9af76806730ff59f697789d4e31ce48ce520fd3b593" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.134429 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6b89b" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.134448 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-dbbc-account-create-update-2cvvx" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.138844 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-operator-scripts\") pod \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\" (UID: \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\") " Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.138934 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wjbx\" (UniqueName: \"kubernetes.io/projected/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-kube-api-access-4wjbx\") pod \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\" (UID: \"d2d0aa4e-b8be-479d-8583-bb3cd2a245f2\") " Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.141193 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2d0aa4e-b8be-479d-8583-bb3cd2a245f2" (UID: "d2d0aa4e-b8be-479d-8583-bb3cd2a245f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.152846 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-kube-api-access-4wjbx" (OuterVolumeSpecName: "kube-api-access-4wjbx") pod "d2d0aa4e-b8be-479d-8583-bb3cd2a245f2" (UID: "d2d0aa4e-b8be-479d-8583-bb3cd2a245f2"). InnerVolumeSpecName "kube-api-access-4wjbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.157682 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-f9vld" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.242661 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppwrm\" (UniqueName: \"kubernetes.io/projected/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-kube-api-access-ppwrm\") pod \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\" (UID: \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\") " Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.242802 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6235a7f3-12fc-455a-a4ba-a09957646334-operator-scripts\") pod \"6235a7f3-12fc-455a-a4ba-a09957646334\" (UID: \"6235a7f3-12fc-455a-a4ba-a09957646334\") " Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.242831 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-operator-scripts\") pod \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\" (UID: \"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3\") " Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.242938 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6rtn\" (UniqueName: \"kubernetes.io/projected/6235a7f3-12fc-455a-a4ba-a09957646334-kube-api-access-t6rtn\") pod \"6235a7f3-12fc-455a-a4ba-a09957646334\" (UID: \"6235a7f3-12fc-455a-a4ba-a09957646334\") " Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.242996 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s5ff\" (UniqueName: \"kubernetes.io/projected/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-kube-api-access-9s5ff\") pod \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\" (UID: \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\") " Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.243024 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-operator-scripts\") pod \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\" (UID: \"4764f8ba-949a-4792-9cd1-2aae9c0a7d92\") " Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.243791 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.243811 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wjbx\" (UniqueName: \"kubernetes.io/projected/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2-kube-api-access-4wjbx\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.248283 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3" (UID: "f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.250764 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6235a7f3-12fc-455a-a4ba-a09957646334-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6235a7f3-12fc-455a-a4ba-a09957646334" (UID: "6235a7f3-12fc-455a-a4ba-a09957646334"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.255730 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6235a7f3-12fc-455a-a4ba-a09957646334-kube-api-access-t6rtn" (OuterVolumeSpecName: "kube-api-access-t6rtn") pod "6235a7f3-12fc-455a-a4ba-a09957646334" (UID: "6235a7f3-12fc-455a-a4ba-a09957646334"). InnerVolumeSpecName "kube-api-access-t6rtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.257993 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4764f8ba-949a-4792-9cd1-2aae9c0a7d92" (UID: "4764f8ba-949a-4792-9cd1-2aae9c0a7d92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.258384 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-kube-api-access-ppwrm" (OuterVolumeSpecName: "kube-api-access-ppwrm") pod "f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3" (UID: "f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3"). InnerVolumeSpecName "kube-api-access-ppwrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.258532 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-kube-api-access-9s5ff" (OuterVolumeSpecName: "kube-api-access-9s5ff") pod "4764f8ba-949a-4792-9cd1-2aae9c0a7d92" (UID: "4764f8ba-949a-4792-9cd1-2aae9c0a7d92"). InnerVolumeSpecName "kube-api-access-9s5ff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.346607 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6235a7f3-12fc-455a-a4ba-a09957646334-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.347155 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.347173 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6rtn\" (UniqueName: \"kubernetes.io/projected/6235a7f3-12fc-455a-a4ba-a09957646334-kube-api-access-t6rtn\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.347190 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s5ff\" (UniqueName: \"kubernetes.io/projected/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-kube-api-access-9s5ff\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.347201 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4764f8ba-949a-4792-9cd1-2aae9c0a7d92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:33 crc kubenswrapper[5034]: I0105 22:13:33.347213 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppwrm\" (UniqueName: \"kubernetes.io/projected/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3-kube-api-access-ppwrm\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.149207 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc1f97e4-be98-4c2a-b819-17d9c3b0be51","Type":"ContainerStarted","Data":"d0c499f0a927479b340ab820f58c0578043492c63baf9a7836426b0f832cdd3a"} Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.149617 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc1f97e4-be98-4c2a-b819-17d9c3b0be51","Type":"ContainerStarted","Data":"70ee7ba0dcf1db4ff6a1836f1e8d9db65589363dab8ad409a064b32b276d7892"} Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.159234 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-f9vld" Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.160035 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f9vld" event={"ID":"4764f8ba-949a-4792-9cd1-2aae9c0a7d92","Type":"ContainerDied","Data":"cf496edeb1f5101b7d4c9ba140d70e1798476d205685add04a03fed3821c624b"} Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.160062 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf496edeb1f5101b7d4c9ba140d70e1798476d205685add04a03fed3821c624b" Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.166660 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6b89b" event={"ID":"6235a7f3-12fc-455a-a4ba-a09957646334","Type":"ContainerDied","Data":"28f86d5dc8838568a7f9978a0faac5230f39a36aa9b0996d12ee3ea391aba037"} Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.166707 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28f86d5dc8838568a7f9978a0faac5230f39a36aa9b0996d12ee3ea391aba037" Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.166800 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6b89b" Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.181182 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c2c8ddc-f82a-4cca-8a84-90c5713754cf","Type":"ContainerStarted","Data":"4ebce1e8d8500a36a9885aca5996773d3f25ae62e84300ef4e6448cbe1e4b976"} Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.195114 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a004-account-create-update-csgjm" event={"ID":"f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3","Type":"ContainerDied","Data":"a4090f30df0a8b9c45866d0f8bbe436fec4a4d5f1b8132c9b634c3f5183a5a92"} Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.196616 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a004-account-create-update-csgjm" Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.200219 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4090f30df0a8b9c45866d0f8bbe436fec4a4d5f1b8132c9b634c3f5183a5a92" Jan 05 22:13:34 crc kubenswrapper[5034]: I0105 22:13:34.200837 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.200803996 podStartE2EDuration="3.200803996s" podCreationTimestamp="2026-01-05 22:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:34.185856033 +0000 UTC m=+1306.557855472" watchObservedRunningTime="2026-01-05 22:13:34.200803996 +0000 UTC m=+1306.572803435" Jan 05 22:13:35 crc kubenswrapper[5034]: I0105 22:13:35.208935 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c2c8ddc-f82a-4cca-8a84-90c5713754cf","Type":"ContainerStarted","Data":"c73e4953491ec9f47f29267b9f26809a0789ba4fdcd8a63a9120ac77e00f3874"} Jan 05 22:13:35 crc kubenswrapper[5034]: I0105 22:13:35.239272 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.239251831 podStartE2EDuration="4.239251831s" podCreationTimestamp="2026-01-05 22:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:13:35.232734276 +0000 UTC m=+1307.604733715" watchObservedRunningTime="2026-01-05 22:13:35.239251831 +0000 UTC m=+1307.611251270" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.224651 5034 generic.go:334] "Generic (PLEG): container finished" podID="f80ad6fa-6594-4137-9771-2a82558004d8" containerID="b846b51363b5a5ea2e9dd2ff21f86882982bb42de16766fe3a86fe46a88bebb8" exitCode=0 Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.224725 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerDied","Data":"b846b51363b5a5ea2e9dd2ff21f86882982bb42de16766fe3a86fe46a88bebb8"} Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.393634 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.507758 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-scripts\") pod \"f80ad6fa-6594-4137-9771-2a82558004d8\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.507919 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-combined-ca-bundle\") pod \"f80ad6fa-6594-4137-9771-2a82558004d8\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.507978 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-config-data\") pod \"f80ad6fa-6594-4137-9771-2a82558004d8\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.508038 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-run-httpd\") pod \"f80ad6fa-6594-4137-9771-2a82558004d8\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.508104 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-sg-core-conf-yaml\") pod \"f80ad6fa-6594-4137-9771-2a82558004d8\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.508148 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmncp\" (UniqueName: \"kubernetes.io/projected/f80ad6fa-6594-4137-9771-2a82558004d8-kube-api-access-vmncp\") pod \"f80ad6fa-6594-4137-9771-2a82558004d8\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.508312 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-log-httpd\") pod \"f80ad6fa-6594-4137-9771-2a82558004d8\" (UID: \"f80ad6fa-6594-4137-9771-2a82558004d8\") " Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.509022 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f80ad6fa-6594-4137-9771-2a82558004d8" (UID: "f80ad6fa-6594-4137-9771-2a82558004d8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.509353 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f80ad6fa-6594-4137-9771-2a82558004d8" (UID: "f80ad6fa-6594-4137-9771-2a82558004d8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.509802 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.509836 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f80ad6fa-6594-4137-9771-2a82558004d8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.519199 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-scripts" (OuterVolumeSpecName: "scripts") pod "f80ad6fa-6594-4137-9771-2a82558004d8" (UID: "f80ad6fa-6594-4137-9771-2a82558004d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.521261 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80ad6fa-6594-4137-9771-2a82558004d8-kube-api-access-vmncp" (OuterVolumeSpecName: "kube-api-access-vmncp") pod "f80ad6fa-6594-4137-9771-2a82558004d8" (UID: "f80ad6fa-6594-4137-9771-2a82558004d8"). InnerVolumeSpecName "kube-api-access-vmncp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.568445 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f80ad6fa-6594-4137-9771-2a82558004d8" (UID: "f80ad6fa-6594-4137-9771-2a82558004d8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.612913 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.612982 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmncp\" (UniqueName: \"kubernetes.io/projected/f80ad6fa-6594-4137-9771-2a82558004d8-kube-api-access-vmncp\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.612998 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.623409 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f80ad6fa-6594-4137-9771-2a82558004d8" (UID: "f80ad6fa-6594-4137-9771-2a82558004d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.642046 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-config-data" (OuterVolumeSpecName: "config-data") pod "f80ad6fa-6594-4137-9771-2a82558004d8" (UID: "f80ad6fa-6594-4137-9771-2a82558004d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.715474 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:36 crc kubenswrapper[5034]: I0105 22:13:36.715519 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f80ad6fa-6594-4137-9771-2a82558004d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.240894 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f80ad6fa-6594-4137-9771-2a82558004d8","Type":"ContainerDied","Data":"4bd99b7e6d367ba39fd3927834c91911432f1edb3bc7037671bac3372341e5a3"} Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.241449 5034 scope.go:117] "RemoveContainer" containerID="8ba76c9feefacb95617456c0e51dffc2fce6e5bb33fdac06f7b530c30750ad1e" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.241629 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.287846 5034 scope.go:117] "RemoveContainer" containerID="50c718b5e1e2b526f111e8c5d327c1056a7b720ce590ad8212f96567dd3d5f54" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.296742 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.314654 5034 scope.go:117] "RemoveContainer" containerID="128d715ccc89a51fb06f8b3512013f8ea7ce34fb44fb6f123464783c33827010" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.325457 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.348981 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349740 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4764f8ba-949a-4792-9cd1-2aae9c0a7d92" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349758 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4764f8ba-949a-4792-9cd1-2aae9c0a7d92" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349774 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="sg-core" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349781 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="sg-core" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349798 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e640fc-0cfe-430b-9b7a-90c5d68e6b76" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349805 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e640fc-0cfe-430b-9b7a-90c5d68e6b76" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349831 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="proxy-httpd" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349837 5034 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="proxy-httpd" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349855 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349861 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349873 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="ceilometer-central-agent" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349878 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="ceilometer-central-agent" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349886 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6235a7f3-12fc-455a-a4ba-a09957646334" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349891 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6235a7f3-12fc-455a-a4ba-a09957646334" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349910 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca996351-9e8b-45d0-91d2-7afc4c65f9cb" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349918 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca996351-9e8b-45d0-91d2-7afc4c65f9cb" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349929 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="ceilometer-notification-agent" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349935 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="ceilometer-notification-agent" Jan 05 22:13:37 crc kubenswrapper[5034]: E0105 22:13:37.349944 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d0aa4e-b8be-479d-8583-bb3cd2a245f2" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.349951 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d0aa4e-b8be-479d-8583-bb3cd2a245f2" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350222 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca996351-9e8b-45d0-91d2-7afc4c65f9cb" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350245 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e640fc-0cfe-430b-9b7a-90c5d68e6b76" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350261 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d0aa4e-b8be-479d-8583-bb3cd2a245f2" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350272 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3" containerName="mariadb-account-create-update" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350282 5034 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6235a7f3-12fc-455a-a4ba-a09957646334" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350290 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4764f8ba-949a-4792-9cd1-2aae9c0a7d92" containerName="mariadb-database-create" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350305 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="ceilometer-central-agent" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350324 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="ceilometer-notification-agent" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350337 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="sg-core" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.350352 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" containerName="proxy-httpd" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.352576 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.353778 5034 scope.go:117] "RemoveContainer" containerID="b846b51363b5a5ea2e9dd2ff21f86882982bb42de16766fe3a86fe46a88bebb8" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.356337 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.357404 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.360661 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.530623 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cnh\" (UniqueName: \"kubernetes.io/projected/1d529be0-0ce2-48fe-af94-a39104f40d3c-kube-api-access-64cnh\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.531056 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.531189 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-scripts\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.531376 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-config-data\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.531510 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.531669 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-run-httpd\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.531900 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-log-httpd\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.634280 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-log-httpd\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.634381 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64cnh\" (UniqueName: \"kubernetes.io/projected/1d529be0-0ce2-48fe-af94-a39104f40d3c-kube-api-access-64cnh\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.634432 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.634449 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-scripts\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.634475 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-config-data\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.634505 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.634533 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-run-httpd\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc 
kubenswrapper[5034]: I0105 22:13:37.635018 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-log-httpd\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.635053 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-run-httpd\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.641096 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-config-data\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.641767 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.642974 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-scripts\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.644719 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.655258 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64cnh\" (UniqueName: \"kubernetes.io/projected/1d529be0-0ce2-48fe-af94-a39104f40d3c-kube-api-access-64cnh\") pod \"ceilometer-0\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.704047 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:37 crc kubenswrapper[5034]: I0105 22:13:37.860659 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80ad6fa-6594-4137-9771-2a82558004d8" path="/var/lib/kubelet/pods/f80ad6fa-6594-4137-9771-2a82558004d8/volumes" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.009883 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-72xjf"] Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.011044 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.024051 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.025714 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zs4z6" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.025968 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.040826 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-72xjf"] Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.146490 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.146872 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drd79\" (UniqueName: \"kubernetes.io/projected/399b297f-2aeb-4859-b528-72ff3213bdcc-kube-api-access-drd79\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.147011 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-config-data\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.147149 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-scripts\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.249598 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-scripts\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.249696 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.249781 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drd79\" (UniqueName: \"kubernetes.io/projected/399b297f-2aeb-4859-b528-72ff3213bdcc-kube-api-access-drd79\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: 
\"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.249828 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-config-data\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.257104 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-scripts\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.266852 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.268729 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-config-data\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.274882 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drd79\" (UniqueName: \"kubernetes.io/projected/399b297f-2aeb-4859-b528-72ff3213bdcc-kube-api-access-drd79\") pod \"nova-cell0-conductor-db-sync-72xjf\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.327909 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.338503 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:13:38 crc kubenswrapper[5034]: I0105 22:13:38.990491 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-72xjf"] Jan 05 22:13:38 crc kubenswrapper[5034]: W0105 22:13:38.995068 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod399b297f_2aeb_4859_b528_72ff3213bdcc.slice/crio-0f7416cf6bd1dcb7226e3342078f20453e869e2c17b7294c63298560a012eeaa WatchSource:0}: Error finding container 0f7416cf6bd1dcb7226e3342078f20453e869e2c17b7294c63298560a012eeaa: Status 404 returned error can't find the container with id 0f7416cf6bd1dcb7226e3342078f20453e869e2c17b7294c63298560a012eeaa Jan 05 22:13:39 crc kubenswrapper[5034]: I0105 22:13:39.286421 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerStarted","Data":"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e"} Jan 05 22:13:39 crc kubenswrapper[5034]: I0105 22:13:39.286467 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerStarted","Data":"ffbb250013c8f1e048f1343b576c1d54443e645a080c6d813e2902acef0ad3fa"} Jan 05 22:13:39 crc kubenswrapper[5034]: I0105 22:13:39.288134 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-72xjf" event={"ID":"399b297f-2aeb-4859-b528-72ff3213bdcc","Type":"ContainerStarted","Data":"0f7416cf6bd1dcb7226e3342078f20453e869e2c17b7294c63298560a012eeaa"} Jan 05 22:13:40 crc kubenswrapper[5034]: I0105 22:13:40.302012 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerStarted","Data":"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30"} Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.319189 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerStarted","Data":"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634"} Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.459177 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.459250 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.504903 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.517307 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.624559 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.624612 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.662461 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Jan 05 22:13:41 crc kubenswrapper[5034]: I0105 22:13:41.669660 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 22:13:42 crc kubenswrapper[5034]: I0105 22:13:42.331634 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:42 crc kubenswrapper[5034]: I0105 22:13:42.331705 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 22:13:42 crc kubenswrapper[5034]: I0105 22:13:42.331742 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 22:13:42 crc kubenswrapper[5034]: I0105 22:13:42.331753 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:44 crc kubenswrapper[5034]: I0105 22:13:44.500316 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 22:13:44 crc kubenswrapper[5034]: I0105 22:13:44.501052 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 22:13:44 crc kubenswrapper[5034]: I0105 22:13:44.542971 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:44 crc kubenswrapper[5034]: I0105 22:13:44.543119 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 22:13:44 crc kubenswrapper[5034]: I0105 22:13:44.555801 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 22:13:44 crc kubenswrapper[5034]: I0105 22:13:44.666811 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 22:13:48 crc kubenswrapper[5034]: I0105 22:13:48.400753 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerStarted","Data":"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4"} Jan 05 22:13:48 crc kubenswrapper[5034]: I0105 22:13:48.401650 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 22:13:48 crc kubenswrapper[5034]: I0105 22:13:48.403118 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-72xjf" event={"ID":"399b297f-2aeb-4859-b528-72ff3213bdcc","Type":"ContainerStarted","Data":"c88a64be53b91c67e8e90903134af8ffc9d01789d94fcb6053bef72cb09a6760"} Jan 05 22:13:48 crc kubenswrapper[5034]: I0105 22:13:48.425463 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.981465762 podStartE2EDuration="11.425443688s" podCreationTimestamp="2026-01-05 22:13:37 +0000 UTC" firstStartedPulling="2026-01-05 22:13:38.33251888 +0000 UTC m=+1310.704518319" lastFinishedPulling="2026-01-05 22:13:47.776496806 +0000 UTC m=+1320.148496245" observedRunningTime="2026-01-05 22:13:48.42094386 +0000 UTC m=+1320.792943299" watchObservedRunningTime="2026-01-05 22:13:48.425443688 +0000 UTC m=+1320.797443127" Jan 05 22:13:48 crc kubenswrapper[5034]: I0105 22:13:48.447790 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-72xjf" 
podStartSLOduration=2.670045036 podStartE2EDuration="11.44775816s" podCreationTimestamp="2026-01-05 22:13:37 +0000 UTC" firstStartedPulling="2026-01-05 22:13:38.997919907 +0000 UTC m=+1311.369919356" lastFinishedPulling="2026-01-05 22:13:47.775633041 +0000 UTC m=+1320.147632480" observedRunningTime="2026-01-05 22:13:48.441889424 +0000 UTC m=+1320.813888863" watchObservedRunningTime="2026-01-05 22:13:48.44775816 +0000 UTC m=+1320.819757589" Jan 05 22:13:49 crc kubenswrapper[5034]: I0105 22:13:49.405362 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.422231 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="proxy-httpd" containerID="cri-o://c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4" gracePeriod=30 Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.422245 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="ceilometer-notification-agent" containerID="cri-o://98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30" gracePeriod=30 Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.422231 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="ceilometer-central-agent" containerID="cri-o://1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e" gracePeriod=30 Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.422286 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="sg-core" containerID="cri-o://e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634" gracePeriod=30 Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.469163 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.469921 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.470129 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.471329 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45da2bec73ffc166cc700c72e797b90c9621bfbc99e0234553fa898f473409e8"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:13:50 crc kubenswrapper[5034]: I0105 22:13:50.471540 5034 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://45da2bec73ffc166cc700c72e797b90c9621bfbc99e0234553fa898f473409e8" gracePeriod=600 Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.331141 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.382859 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-combined-ca-bundle\") pod \"1d529be0-0ce2-48fe-af94-a39104f40d3c\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.383139 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-log-httpd\") pod \"1d529be0-0ce2-48fe-af94-a39104f40d3c\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.383172 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64cnh\" (UniqueName: \"kubernetes.io/projected/1d529be0-0ce2-48fe-af94-a39104f40d3c-kube-api-access-64cnh\") pod \"1d529be0-0ce2-48fe-af94-a39104f40d3c\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.383217 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-config-data\") pod \"1d529be0-0ce2-48fe-af94-a39104f40d3c\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.383237 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-scripts\") pod \"1d529be0-0ce2-48fe-af94-a39104f40d3c\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.383274 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-sg-core-conf-yaml\") pod \"1d529be0-0ce2-48fe-af94-a39104f40d3c\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.383314 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-run-httpd\") pod \"1d529be0-0ce2-48fe-af94-a39104f40d3c\" (UID: \"1d529be0-0ce2-48fe-af94-a39104f40d3c\") " Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.384066 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1d529be0-0ce2-48fe-af94-a39104f40d3c" (UID: "1d529be0-0ce2-48fe-af94-a39104f40d3c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.386757 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1d529be0-0ce2-48fe-af94-a39104f40d3c" (UID: "1d529be0-0ce2-48fe-af94-a39104f40d3c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.390662 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-scripts" (OuterVolumeSpecName: "scripts") pod "1d529be0-0ce2-48fe-af94-a39104f40d3c" (UID: "1d529be0-0ce2-48fe-af94-a39104f40d3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.394173 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d529be0-0ce2-48fe-af94-a39104f40d3c-kube-api-access-64cnh" (OuterVolumeSpecName: "kube-api-access-64cnh") pod "1d529be0-0ce2-48fe-af94-a39104f40d3c" (UID: "1d529be0-0ce2-48fe-af94-a39104f40d3c"). InnerVolumeSpecName "kube-api-access-64cnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.420208 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1d529be0-0ce2-48fe-af94-a39104f40d3c" (UID: "1d529be0-0ce2-48fe-af94-a39104f40d3c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434548 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerID="c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4" exitCode=0 Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434579 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerID="e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634" exitCode=2 Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434589 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerID="98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30" exitCode=0 Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434599 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerID="1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e" exitCode=0 Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434593 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerDied","Data":"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4"} Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434680 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerDied","Data":"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634"} Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434701 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerDied","Data":"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30"} Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434719 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerDied","Data":"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e"} Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434733 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d529be0-0ce2-48fe-af94-a39104f40d3c","Type":"ContainerDied","Data":"ffbb250013c8f1e048f1343b576c1d54443e645a080c6d813e2902acef0ad3fa"} Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434672 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.434754 5034 scope.go:117] "RemoveContainer" containerID="c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.441052 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="45da2bec73ffc166cc700c72e797b90c9621bfbc99e0234553fa898f473409e8" exitCode=0 Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.441111 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"45da2bec73ffc166cc700c72e797b90c9621bfbc99e0234553fa898f473409e8"} Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.441144 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f"} Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.462475 5034 scope.go:117] "RemoveContainer" containerID="e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.463800 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d529be0-0ce2-48fe-af94-a39104f40d3c" (UID: "1d529be0-0ce2-48fe-af94-a39104f40d3c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.481832 5034 scope.go:117] "RemoveContainer" containerID="98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.485926 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.485954 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.485963 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64cnh\" (UniqueName: \"kubernetes.io/projected/1d529be0-0ce2-48fe-af94-a39104f40d3c-kube-api-access-64cnh\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.485974 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.485984 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.486118 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d529be0-0ce2-48fe-af94-a39104f40d3c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.498623 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-config-data" (OuterVolumeSpecName: "config-data") pod "1d529be0-0ce2-48fe-af94-a39104f40d3c" (UID: "1d529be0-0ce2-48fe-af94-a39104f40d3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.516603 5034 scope.go:117] "RemoveContainer" containerID="1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.539755 5034 scope.go:117] "RemoveContainer" containerID="c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4" Jan 05 22:13:51 crc kubenswrapper[5034]: E0105 22:13:51.540452 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": container with ID starting with c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4 not found: ID does not exist" containerID="c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.540490 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4"} err="failed to get container status \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": rpc error: code = NotFound desc = could not find container \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": container with ID starting with c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.540523 5034 scope.go:117] "RemoveContainer" containerID="e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634" Jan 05 22:13:51 crc kubenswrapper[5034]: E0105 22:13:51.545362 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": container with ID starting with e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634 not found: ID does not exist" containerID="e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.545388 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634"} err="failed to get container status \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": rpc error: code = NotFound desc = could not find container \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": container with ID starting with e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.545405 5034 scope.go:117] "RemoveContainer" containerID="98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30" Jan 05 22:13:51 crc kubenswrapper[5034]: E0105 22:13:51.545825 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": container with ID starting with 98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30 not found: ID does not exist" containerID="98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.545896 5034 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30"} err="failed to get container status \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": rpc error: code = NotFound desc = could not find container \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": container with ID starting with 98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.545933 5034 scope.go:117] "RemoveContainer" containerID="1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e" Jan 05 22:13:51 crc kubenswrapper[5034]: E0105 22:13:51.546624 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": container with ID starting with 1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e not found: ID does not exist" containerID="1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.546669 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e"} err="failed to get container status \"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": rpc error: code = NotFound desc = could not find container \"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": container with ID starting with 1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.546703 5034 scope.go:117] "RemoveContainer" containerID="c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.547102 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4"} err="failed to get container status \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": rpc error: code = NotFound desc = could not find container \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": container with ID starting with c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.547126 5034 scope.go:117] "RemoveContainer" containerID="e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.547465 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634"} err="failed to get container status \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": rpc error: code = NotFound desc = could not find container \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": container with ID starting with e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.547541 5034 scope.go:117] "RemoveContainer" containerID="98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.547766 5034 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30"} err="failed to get container status \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": rpc error: code = NotFound desc = could not find container \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": container with ID starting with 98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.547789 5034 scope.go:117] "RemoveContainer" containerID="1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.548026 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e"} err="failed to get container status \"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": rpc error: code = NotFound desc = could not find container \"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": container with ID starting with 1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.548049 5034 scope.go:117] "RemoveContainer" containerID="c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.548313 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4"} err="failed to get container status \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": rpc error: code = NotFound desc = could not find container \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": container with ID starting with c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.548336 5034 scope.go:117] "RemoveContainer" containerID="e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.548711 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634"} err="failed to get container status \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": rpc error: code = NotFound desc = could not find container \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": container with ID starting with e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.548732 5034 scope.go:117] "RemoveContainer" containerID="98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.549063 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30"} err="failed to get container status \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": rpc error: code = NotFound desc = could not find container \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": container with ID starting with 98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30 not found: ID does not exist" Jan 
05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.549128 5034 scope.go:117] "RemoveContainer" containerID="1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.549337 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e"} err="failed to get container status \"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": rpc error: code = NotFound desc = could not find container \"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": container with ID starting with 1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.549358 5034 scope.go:117] "RemoveContainer" containerID="c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.549561 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4"} err="failed to get container status \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": rpc error: code = NotFound desc = could not find container \"c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4\": container with ID starting with c79bce3316870b75ff31a3eb966007f9fed243a8f95751562c0b5c041b7a79c4 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.549583 5034 scope.go:117] "RemoveContainer" containerID="e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.549817 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634"} err="failed to get container status \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": rpc error: code = NotFound desc = could not find container \"e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634\": container with ID starting with e1ebfe1a0ff443428273ba189830079d4901cf010ae4b082668cb6f1f7403634 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.549842 5034 scope.go:117] "RemoveContainer" containerID="98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.550045 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30"} err="failed to get container status \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": rpc error: code = NotFound desc = could not find container \"98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30\": container with ID starting with 98c1a2500d87936e3470cfe8ee0a050672879e30c0bbcea84da36e9ef569ee30 not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.550068 5034 scope.go:117] "RemoveContainer" containerID="1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.550305 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e"} err="failed to get container status 
\"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": rpc error: code = NotFound desc = could not find container \"1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e\": container with ID starting with 1a67becc289b6eab2de3e14bba5edd74b9231b2beb4cd0ac0ee3a9f4c6b6353e not found: ID does not exist" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.550331 5034 scope.go:117] "RemoveContainer" containerID="7aa61e9f5aaa409d4332d17291c1246e891073205f554c85d6e919f6906d1cd4" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.589184 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d529be0-0ce2-48fe-af94-a39104f40d3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.787271 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.805777 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.817661 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:51 crc kubenswrapper[5034]: E0105 22:13:51.818286 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="ceilometer-central-agent" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.818312 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="ceilometer-central-agent" Jan 05 22:13:51 crc kubenswrapper[5034]: E0105 22:13:51.818326 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="proxy-httpd" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.818340 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="proxy-httpd" Jan 05 22:13:51 crc kubenswrapper[5034]: E0105 22:13:51.818368 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="ceilometer-notification-agent" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.818377 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="ceilometer-notification-agent" Jan 05 22:13:51 crc kubenswrapper[5034]: E0105 22:13:51.818404 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="sg-core" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.818412 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="sg-core" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.818648 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="ceilometer-notification-agent" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.818675 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="ceilometer-central-agent" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.818694 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="sg-core" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.818710 5034 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" containerName="proxy-httpd" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.821107 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.823626 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.823810 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.826056 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.866212 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d529be0-0ce2-48fe-af94-a39104f40d3c" path="/var/lib/kubelet/pods/1d529be0-0ce2-48fe-af94-a39104f40d3c/volumes" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.894727 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-config-data\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.894797 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.894879 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-scripts\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.894925 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-log-httpd\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.894990 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-run-httpd\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.895039 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkfdp\" (UniqueName: \"kubernetes.io/projected/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-kube-api-access-qkfdp\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.895099 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.996748 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.997693 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-scripts\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.997745 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-log-httpd\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.998028 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-run-httpd\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.998253 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-log-httpd\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.998330 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-run-httpd\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.998463 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkfdp\" (UniqueName: \"kubernetes.io/projected/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-kube-api-access-qkfdp\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.998872 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:51 crc kubenswrapper[5034]: I0105 22:13:51.999309 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-config-data\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:52 crc kubenswrapper[5034]: I0105 22:13:52.002232 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " 
pod="openstack/ceilometer-0" Jan 05 22:13:52 crc kubenswrapper[5034]: I0105 22:13:52.002549 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-scripts\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:52 crc kubenswrapper[5034]: I0105 22:13:52.007105 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-config-data\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:52 crc kubenswrapper[5034]: I0105 22:13:52.007823 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:52 crc kubenswrapper[5034]: I0105 22:13:52.020280 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkfdp\" (UniqueName: \"kubernetes.io/projected/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-kube-api-access-qkfdp\") pod \"ceilometer-0\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " pod="openstack/ceilometer-0" Jan 05 22:13:52 crc kubenswrapper[5034]: I0105 22:13:52.154069 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:13:52 crc kubenswrapper[5034]: I0105 22:13:52.818213 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:53 crc kubenswrapper[5034]: I0105 22:13:53.543623 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerStarted","Data":"66ee4d74c477df6ff13494d934afe50e233cd2b68079eab4b73e5a0a33c79858"} Jan 05 22:13:54 crc kubenswrapper[5034]: I0105 22:13:54.554748 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerStarted","Data":"9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae"} Jan 05 22:13:54 crc kubenswrapper[5034]: I0105 22:13:54.555134 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerStarted","Data":"d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5"} Jan 05 22:13:55 crc kubenswrapper[5034]: I0105 22:13:55.486900 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:13:55 crc kubenswrapper[5034]: I0105 22:13:55.569408 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerStarted","Data":"d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d"} Jan 05 22:13:57 crc kubenswrapper[5034]: I0105 22:13:57.599934 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerStarted","Data":"4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2"} Jan 05 22:13:57 crc kubenswrapper[5034]: I0105 22:13:57.600512 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 22:13:57 
crc kubenswrapper[5034]: I0105 22:13:57.600253 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="sg-core" containerID="cri-o://d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d" gracePeriod=30 Jan 05 22:13:57 crc kubenswrapper[5034]: I0105 22:13:57.600189 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="ceilometer-central-agent" containerID="cri-o://d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5" gracePeriod=30 Jan 05 22:13:57 crc kubenswrapper[5034]: I0105 22:13:57.600326 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="proxy-httpd" containerID="cri-o://4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2" gracePeriod=30 Jan 05 22:13:57 crc kubenswrapper[5034]: I0105 22:13:57.600294 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="ceilometer-notification-agent" containerID="cri-o://9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae" gracePeriod=30 Jan 05 22:13:57 crc kubenswrapper[5034]: I0105 22:13:57.630619 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.552420242 podStartE2EDuration="6.630598319s" podCreationTimestamp="2026-01-05 22:13:51 +0000 UTC" firstStartedPulling="2026-01-05 22:13:52.836529274 +0000 UTC m=+1325.208528713" lastFinishedPulling="2026-01-05 22:13:56.914707351 +0000 UTC m=+1329.286706790" observedRunningTime="2026-01-05 22:13:57.62427405 +0000 UTC m=+1329.996273489" watchObservedRunningTime="2026-01-05 22:13:57.630598319 +0000 UTC m=+1330.002597768" Jan 05 22:13:58 crc kubenswrapper[5034]: I0105 22:13:58.614447 5034 generic.go:334] "Generic (PLEG): container finished" podID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerID="4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2" exitCode=0 Jan 05 22:13:58 crc kubenswrapper[5034]: I0105 22:13:58.614935 5034 generic.go:334] "Generic (PLEG): container finished" podID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerID="d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d" exitCode=2 Jan 05 22:13:58 crc kubenswrapper[5034]: I0105 22:13:58.614944 5034 generic.go:334] "Generic (PLEG): container finished" podID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerID="9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae" exitCode=0 Jan 05 22:13:58 crc kubenswrapper[5034]: I0105 22:13:58.614518 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerDied","Data":"4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2"} Jan 05 22:13:58 crc kubenswrapper[5034]: I0105 22:13:58.614981 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerDied","Data":"d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d"} Jan 05 22:13:58 crc kubenswrapper[5034]: I0105 22:13:58.614992 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerDied","Data":"9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae"} Jan 05 22:13:59 crc kubenswrapper[5034]: I0105 22:13:59.624920 5034 generic.go:334] "Generic (PLEG): container finished" podID="399b297f-2aeb-4859-b528-72ff3213bdcc" containerID="c88a64be53b91c67e8e90903134af8ffc9d01789d94fcb6053bef72cb09a6760" exitCode=0 Jan 05 22:13:59 crc kubenswrapper[5034]: I0105 22:13:59.625022 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-72xjf" event={"ID":"399b297f-2aeb-4859-b528-72ff3213bdcc","Type":"ContainerDied","Data":"c88a64be53b91c67e8e90903134af8ffc9d01789d94fcb6053bef72cb09a6760"} Jan 05 22:14:00 crc kubenswrapper[5034]: I0105 22:14:00.978770 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.105357 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drd79\" (UniqueName: \"kubernetes.io/projected/399b297f-2aeb-4859-b528-72ff3213bdcc-kube-api-access-drd79\") pod \"399b297f-2aeb-4859-b528-72ff3213bdcc\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.105607 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-combined-ca-bundle\") pod \"399b297f-2aeb-4859-b528-72ff3213bdcc\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.105698 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-scripts\") pod \"399b297f-2aeb-4859-b528-72ff3213bdcc\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.105747 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-config-data\") pod \"399b297f-2aeb-4859-b528-72ff3213bdcc\" (UID: \"399b297f-2aeb-4859-b528-72ff3213bdcc\") " Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.112729 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-scripts" (OuterVolumeSpecName: "scripts") pod "399b297f-2aeb-4859-b528-72ff3213bdcc" (UID: "399b297f-2aeb-4859-b528-72ff3213bdcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.112979 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399b297f-2aeb-4859-b528-72ff3213bdcc-kube-api-access-drd79" (OuterVolumeSpecName: "kube-api-access-drd79") pod "399b297f-2aeb-4859-b528-72ff3213bdcc" (UID: "399b297f-2aeb-4859-b528-72ff3213bdcc"). InnerVolumeSpecName "kube-api-access-drd79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.137293 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "399b297f-2aeb-4859-b528-72ff3213bdcc" (UID: "399b297f-2aeb-4859-b528-72ff3213bdcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.140289 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-config-data" (OuterVolumeSpecName: "config-data") pod "399b297f-2aeb-4859-b528-72ff3213bdcc" (UID: "399b297f-2aeb-4859-b528-72ff3213bdcc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.208559 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.208620 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.208633 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drd79\" (UniqueName: \"kubernetes.io/projected/399b297f-2aeb-4859-b528-72ff3213bdcc-kube-api-access-drd79\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.208656 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399b297f-2aeb-4859-b528-72ff3213bdcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.644691 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-72xjf" event={"ID":"399b297f-2aeb-4859-b528-72ff3213bdcc","Type":"ContainerDied","Data":"0f7416cf6bd1dcb7226e3342078f20453e869e2c17b7294c63298560a012eeaa"} Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.644734 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7416cf6bd1dcb7226e3342078f20453e869e2c17b7294c63298560a012eeaa" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.645390 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-72xjf" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.749284 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 22:14:01 crc kubenswrapper[5034]: E0105 22:14:01.749698 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399b297f-2aeb-4859-b528-72ff3213bdcc" containerName="nova-cell0-conductor-db-sync" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.749711 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="399b297f-2aeb-4859-b528-72ff3213bdcc" containerName="nova-cell0-conductor-db-sync" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.749911 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="399b297f-2aeb-4859-b528-72ff3213bdcc" containerName="nova-cell0-conductor-db-sync" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.750561 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.756246 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zs4z6" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.756687 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.763794 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.819349 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.819804 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.819890 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpq9m\" (UniqueName: \"kubernetes.io/projected/52dac0d7-1025-49a8-8130-1f0d5050331c-kube-api-access-wpq9m\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.921945 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.922099 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 
22:14:01.922118 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpq9m\" (UniqueName: \"kubernetes.io/projected/52dac0d7-1025-49a8-8130-1f0d5050331c-kube-api-access-wpq9m\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.930544 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.931794 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:01 crc kubenswrapper[5034]: I0105 22:14:01.945699 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpq9m\" (UniqueName: \"kubernetes.io/projected/52dac0d7-1025-49a8-8130-1f0d5050331c-kube-api-access-wpq9m\") pod \"nova-cell0-conductor-0\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:02 crc kubenswrapper[5034]: I0105 22:14:02.074395 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:02 crc kubenswrapper[5034]: I0105 22:14:02.590784 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 22:14:02 crc kubenswrapper[5034]: W0105 22:14:02.602352 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52dac0d7_1025_49a8_8130_1f0d5050331c.slice/crio-e8b42db5670e156feb9e636459875f88758809b0ecc2c07dc235fd21ebe04537 WatchSource:0}: Error finding container e8b42db5670e156feb9e636459875f88758809b0ecc2c07dc235fd21ebe04537: Status 404 returned error can't find the container with id e8b42db5670e156feb9e636459875f88758809b0ecc2c07dc235fd21ebe04537 Jan 05 22:14:02 crc kubenswrapper[5034]: I0105 22:14:02.667111 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"52dac0d7-1025-49a8-8130-1f0d5050331c","Type":"ContainerStarted","Data":"e8b42db5670e156feb9e636459875f88758809b0ecc2c07dc235fd21ebe04537"} Jan 05 22:14:03 crc kubenswrapper[5034]: I0105 22:14:03.678474 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"52dac0d7-1025-49a8-8130-1f0d5050331c","Type":"ContainerStarted","Data":"32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8"} Jan 05 22:14:03 crc kubenswrapper[5034]: I0105 22:14:03.679214 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:03 crc kubenswrapper[5034]: I0105 22:14:03.696359 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.6963402050000003 podStartE2EDuration="2.696340205s" podCreationTimestamp="2026-01-05 22:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-05 22:14:03.694590216 +0000 UTC m=+1336.066589645" watchObservedRunningTime="2026-01-05 22:14:03.696340205 +0000 UTC m=+1336.068339644" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.718284 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.722355 5034 generic.go:334] "Generic (PLEG): container finished" podID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerID="d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5" exitCode=0 Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.723492 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerDied","Data":"d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5"} Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.723524 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092","Type":"ContainerDied","Data":"66ee4d74c477df6ff13494d934afe50e233cd2b68079eab4b73e5a0a33c79858"} Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.723542 5034 scope.go:117] "RemoveContainer" containerID="4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.768578 5034 scope.go:117] "RemoveContainer" containerID="d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.787786 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-log-httpd\") pod \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.787841 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-config-data\") pod \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.787896 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkfdp\" (UniqueName: \"kubernetes.io/projected/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-kube-api-access-qkfdp\") pod \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.787922 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-combined-ca-bundle\") pod \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.787993 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-run-httpd\") pod \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.788161 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-sg-core-conf-yaml\") pod \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.788200 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-scripts\") pod \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\" (UID: \"fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092\") " Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.789734 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" (UID: "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.789814 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" (UID: "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.799906 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-scripts" (OuterVolumeSpecName: "scripts") pod "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" (UID: "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.808534 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-kube-api-access-qkfdp" (OuterVolumeSpecName: "kube-api-access-qkfdp") pod "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" (UID: "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092"). InnerVolumeSpecName "kube-api-access-qkfdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.846306 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" (UID: "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.875727 5034 scope.go:117] "RemoveContainer" containerID="9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.909282 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.909321 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.909330 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.909339 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkfdp\" (UniqueName: \"kubernetes.io/projected/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-kube-api-access-qkfdp\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.909349 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.910506 5034 scope.go:117] "RemoveContainer" containerID="d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.956328 5034 scope.go:117] "RemoveContainer" containerID="4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2" Jan 05 22:14:04 crc kubenswrapper[5034]: E0105 22:14:04.965298 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2\": container with ID starting with 4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2 not found: ID does not exist" containerID="4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.965357 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2"} err="failed to get container status \"4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2\": rpc error: code = NotFound desc = could not find container \"4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2\": container with ID starting with 4a1c6eddd1b181ef067ea5edf2d102bb2af4aacdb423f2aa52d69b6463d41ed2 not found: ID does not exist" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.965390 5034 scope.go:117] "RemoveContainer" containerID="d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d" Jan 05 22:14:04 crc kubenswrapper[5034]: E0105 22:14:04.969775 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d\": container with ID starting with d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d not found: ID does not exist" 
containerID="d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.969855 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d"} err="failed to get container status \"d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d\": rpc error: code = NotFound desc = could not find container \"d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d\": container with ID starting with d5636b8f57afe0af0c28050324a37f467c1141ea12c614ee55df85d45e1d739d not found: ID does not exist" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.969896 5034 scope.go:117] "RemoveContainer" containerID="9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae" Jan 05 22:14:04 crc kubenswrapper[5034]: E0105 22:14:04.970653 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae\": container with ID starting with 9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae not found: ID does not exist" containerID="9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.970703 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae"} err="failed to get container status \"9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae\": rpc error: code = NotFound desc = could not find container \"9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae\": container with ID starting with 9d66fa87db2b99bffb1f8c6344dab5fd6608e608e86ea21fa91ee3357be947ae not found: ID does not exist" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.970742 5034 scope.go:117] "RemoveContainer" containerID="d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.972180 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" (UID: "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:04 crc kubenswrapper[5034]: E0105 22:14:04.974476 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5\": container with ID starting with d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5 not found: ID does not exist" containerID="d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.974519 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5"} err="failed to get container status \"d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5\": rpc error: code = NotFound desc = could not find container \"d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5\": container with ID starting with d640889265daeb4ce6d25c362086e3aff89bfc52fb6bcf5b17e98e9aafbaa4d5 not found: ID does not exist" Jan 05 22:14:04 crc kubenswrapper[5034]: I0105 22:14:04.985891 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-config-data" (OuterVolumeSpecName: "config-data") pod "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" (UID: "fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.011716 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.011755 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.732147 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.767102 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.775923 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793152 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:05 crc kubenswrapper[5034]: E0105 22:14:05.793579 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="ceilometer-central-agent" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793600 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="ceilometer-central-agent" Jan 05 22:14:05 crc kubenswrapper[5034]: E0105 22:14:05.793620 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="ceilometer-notification-agent" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793629 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="ceilometer-notification-agent" Jan 05 22:14:05 crc kubenswrapper[5034]: E0105 22:14:05.793657 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="sg-core" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793665 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="sg-core" Jan 05 22:14:05 crc kubenswrapper[5034]: E0105 22:14:05.793684 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="proxy-httpd" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793692 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="proxy-httpd" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793909 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="ceilometer-central-agent" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793940 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="ceilometer-notification-agent" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793954 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="sg-core" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.793969 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" containerName="proxy-httpd" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.795783 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.798921 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.799004 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.813602 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.857096 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092" path="/var/lib/kubelet/pods/fc946fb3-f3e6-4b7d-a595-2e2f1c1f4092/volumes" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.926490 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-run-httpd\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.926549 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.926629 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-scripts\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.926659 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-config-data\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.926686 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.926727 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-log-httpd\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:05 crc kubenswrapper[5034]: I0105 22:14:05.926763 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2lr\" (UniqueName: \"kubernetes.io/projected/58cd44c4-477b-449c-820d-33de1ef0dba1-kube-api-access-4f2lr\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.028832 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-scripts\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.028885 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-config-data\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.028924 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.028980 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-log-httpd\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.029015 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f2lr\" (UniqueName: \"kubernetes.io/projected/58cd44c4-477b-449c-820d-33de1ef0dba1-kube-api-access-4f2lr\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.029125 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-run-httpd\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.029155 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.029503 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-log-httpd\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.029752 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-run-httpd\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.033539 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.033754 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.033960 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-scripts\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.038250 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-config-data\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.062944 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f2lr\" (UniqueName: \"kubernetes.io/projected/58cd44c4-477b-449c-820d-33de1ef0dba1-kube-api-access-4f2lr\") pod \"ceilometer-0\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") " pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.113135 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.626664 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:06 crc kubenswrapper[5034]: W0105 22:14:06.631373 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58cd44c4_477b_449c_820d_33de1ef0dba1.slice/crio-44081a168988efbee913bf8e8c0173a3f1cf1e34d97e9c806ac39c7d2a29ad13 WatchSource:0}: Error finding container 44081a168988efbee913bf8e8c0173a3f1cf1e34d97e9c806ac39c7d2a29ad13: Status 404 returned error can't find the container with id 44081a168988efbee913bf8e8c0173a3f1cf1e34d97e9c806ac39c7d2a29ad13 Jan 05 22:14:06 crc kubenswrapper[5034]: I0105 22:14:06.740868 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerStarted","Data":"44081a168988efbee913bf8e8c0173a3f1cf1e34d97e9c806ac39c7d2a29ad13"} Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.105715 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.705406 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ptccl"] Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.707726 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.711864 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.711945 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.720947 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ptccl"]
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.773721 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerStarted","Data":"f875cd7858cfa25812f7e00c47f3309348048d27de4fe4dbb9d4560d0d332425"}
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.871415 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-config-data\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.871968 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-scripts\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.872014 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.872045 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvl8p\" (UniqueName: \"kubernetes.io/projected/fea79208-89f2-486d-830a-d7ab3bab3342-kube-api-access-hvl8p\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.927312 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.929431 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.935020 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.964108 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.973904 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-scripts\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.973982 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.974007 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvl8p\" (UniqueName: \"kubernetes.io/projected/fea79208-89f2-486d-830a-d7ab3bab3342-kube-api-access-hvl8p\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:07 crc kubenswrapper[5034]: I0105 22:14:07.974098 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-config-data\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.004754 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-config-data\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.006545 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.022691 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-scripts\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.027880 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvl8p\" (UniqueName: \"kubernetes.io/projected/fea79208-89f2-486d-830a-d7ab3bab3342-kube-api-access-hvl8p\") pod \"nova-cell0-cell-mapping-ptccl\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") " pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.051188 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.079370 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20991e9c-6454-4881-9e4a-b314f666f34e-logs\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.079457 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.079493 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-config-data\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.079556 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb94w\" (UniqueName: \"kubernetes.io/projected/20991e9c-6454-4881-9e4a-b314f666f34e-kube-api-access-pb94w\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.114430 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.115835 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.119719 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.193213 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-config-data\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.193365 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb6tb\" (UniqueName: \"kubernetes.io/projected/40fdb124-8bf9-40af-ac43-5d6c1de9a948-kube-api-access-qb6tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.193464 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb94w\" (UniqueName: \"kubernetes.io/projected/20991e9c-6454-4881-9e4a-b314f666f34e-kube-api-access-pb94w\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.193784 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.193807 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.193940 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20991e9c-6454-4881-9e4a-b314f666f34e-logs\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.194042 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.203110 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20991e9c-6454-4881-9e4a-b314f666f34e-logs\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.205857 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.218300 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.219901 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.226585 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-config-data\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.252812 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.272376 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb94w\" (UniqueName: \"kubernetes.io/projected/20991e9c-6454-4881-9e4a-b314f666f34e-kube-api-access-pb94w\") pod \"nova-api-0\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.283174 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.292042 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.300089 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.300137 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.300188 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.300227 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-config-data\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.300292 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb6tb\" (UniqueName: \"kubernetes.io/projected/40fdb124-8bf9-40af-ac43-5d6c1de9a948-kube-api-access-qb6tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.300391 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cl8v\" (UniqueName: \"kubernetes.io/projected/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-kube-api-access-6cl8v\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.304850 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.307871 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.358154 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.360528 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.388167 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.392849 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.393740 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb6tb\" (UniqueName: \"kubernetes.io/projected/40fdb124-8bf9-40af-ac43-5d6c1de9a948-kube-api-access-qb6tb\") pod \"nova-cell1-novncproxy-0\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.397056 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.402816 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cl8v\" (UniqueName: \"kubernetes.io/projected/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-kube-api-access-6cl8v\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.403247 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.403321 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-config-data\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.408930 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-config-data\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.421614 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.509646 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.509759 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67aa8cdc-bdff-4d44-a99b-4135eda9265a-logs\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.509801 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdvq8\" (UniqueName: \"kubernetes.io/projected/67aa8cdc-bdff-4d44-a99b-4135eda9265a-kube-api-access-hdvq8\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.509822 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-config-data\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.540803 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cl8v\" (UniqueName: \"kubernetes.io/projected/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-kube-api-access-6cl8v\") pod \"nova-scheduler-0\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.551975 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.554705 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ccvsh"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.558850 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.602046 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ccvsh"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.611998 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.612696 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67aa8cdc-bdff-4d44-a99b-4135eda9265a-logs\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.612755 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdvq8\" (UniqueName: \"kubernetes.io/projected/67aa8cdc-bdff-4d44-a99b-4135eda9265a-kube-api-access-hdvq8\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.612781 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-config-data\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.613438 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67aa8cdc-bdff-4d44-a99b-4135eda9265a-logs\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.617610 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.627626 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-config-data\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.660679 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdvq8\" (UniqueName: \"kubernetes.io/projected/67aa8cdc-bdff-4d44-a99b-4135eda9265a-kube-api-access-hdvq8\") pod \"nova-metadata-0\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.714735 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.714853 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-config\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.714910 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.714944 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf4pb\" (UniqueName: \"kubernetes.io/projected/8de19bd3-39c9-47d0-bb6c-4bd536d54611-kube-api-access-hf4pb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.714970 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.714997 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.752368 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.793543 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.916459 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.917044 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-config\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.917680 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.917739 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf4pb\" (UniqueName: \"kubernetes.io/projected/8de19bd3-39c9-47d0-bb6c-4bd536d54611-kube-api-access-hf4pb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.917958 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.918010 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.928423 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.943271 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-config\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.943336 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.944007 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.944582 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.963990 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ptccl"]
Jan 05 22:14:08 crc kubenswrapper[5034]: I0105 22:14:08.966073 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf4pb\" (UniqueName: \"kubernetes.io/projected/8de19bd3-39c9-47d0-bb6c-4bd536d54611-kube-api-access-hf4pb\") pod \"dnsmasq-dns-557bbc7df7-ccvsh\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.229096 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.294620 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 22:14:09 crc kubenswrapper[5034]: W0105 22:14:09.328222 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40fdb124_8bf9_40af_ac43_5d6c1de9a948.slice/crio-55b949b4ad14ccfd428693a959ddb9034196046e156becc15b214ad2bc24f465 WatchSource:0}: Error finding container 55b949b4ad14ccfd428693a959ddb9034196046e156becc15b214ad2bc24f465: Status 404 returned error can't find the container with id 55b949b4ad14ccfd428693a959ddb9034196046e156becc15b214ad2bc24f465
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.494622 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sh4vf"]
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.496752 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.501247 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.507700 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.511946 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sh4vf"]
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.593738 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.659902 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-scripts\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.659993 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnlbh\" (UniqueName: \"kubernetes.io/projected/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-kube-api-access-gnlbh\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.660028 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.661982 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-config-data\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.673991 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.765019 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-config-data\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.765216 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-scripts\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.765261 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnlbh\" (UniqueName: \"kubernetes.io/projected/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-kube-api-access-gnlbh\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.765290 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.778789 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.781256 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-config-data\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.784504 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-scripts\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.788442 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnlbh\" (UniqueName: \"kubernetes.io/projected/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-kube-api-access-gnlbh\") pod \"nova-cell1-conductor-db-sync-sh4vf\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.830451 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sh4vf"
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.861477 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerStarted","Data":"5b91701d06e28e42ec9fc0c9972bb0d07b952d7c7701630f2489b5a58b251522"}
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.861526 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.861543 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20991e9c-6454-4881-9e4a-b314f666f34e","Type":"ContainerStarted","Data":"399f14d52c23c6bf0a1061b3b4d5c6eda31134681bc2ae69ff42c0b3b505d821"}
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.875947 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67aa8cdc-bdff-4d44-a99b-4135eda9265a","Type":"ContainerStarted","Data":"31bfdce9395d4cbdb86b5e1d31566ca37bbd6c1b1cdb0425b46e8422fa7f9d08"}
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.884943 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"40fdb124-8bf9-40af-ac43-5d6c1de9a948","Type":"ContainerStarted","Data":"55b949b4ad14ccfd428693a959ddb9034196046e156becc15b214ad2bc24f465"}
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.891643 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ptccl" event={"ID":"fea79208-89f2-486d-830a-d7ab3bab3342","Type":"ContainerStarted","Data":"02eccd3dc4c2053f38511d134bd4c631260cba004f866c32569a8710aa27ac3d"}
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.891688 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ptccl" event={"ID":"fea79208-89f2-486d-830a-d7ab3bab3342","Type":"ContainerStarted","Data":"050b7a19e2cb743bc3e3f9e79d0fc19b046bc5020d3e273810811f3e07415d70"}
Jan 05 22:14:09 crc kubenswrapper[5034]: I0105 22:14:09.933031 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ptccl" podStartSLOduration=2.9330037730000003 podStartE2EDuration="2.933003773s" podCreationTimestamp="2026-01-05 22:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:09.924670427 +0000 UTC m=+1342.296669866" watchObservedRunningTime="2026-01-05 22:14:09.933003773 +0000 UTC m=+1342.305003212"
Jan 05 22:14:10 crc kubenswrapper[5034]: I0105 22:14:10.084146 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ccvsh"]
Jan 05 22:14:10 crc kubenswrapper[5034]: W0105 22:14:10.108668 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de19bd3_39c9_47d0_bb6c_4bd536d54611.slice/crio-8df1f628499a66cdfb5b815a4ae03ff3fea633b2cb2063fa50a9eff9192a42bf WatchSource:0}: Error finding container 8df1f628499a66cdfb5b815a4ae03ff3fea633b2cb2063fa50a9eff9192a42bf: Status 404 returned error can't find the container with id 8df1f628499a66cdfb5b815a4ae03ff3fea633b2cb2063fa50a9eff9192a42bf
Jan 05 22:14:10 crc kubenswrapper[5034]: I0105 22:14:10.560550 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sh4vf"]
Jan 05 22:14:10 crc kubenswrapper[5034]: I0105 22:14:10.912878 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sh4vf" event={"ID":"9b64a7d9-a1a5-4d7e-9012-c770d15f4267","Type":"ContainerStarted","Data":"a6d324dd629efd7f019950e0592d1d0f6d8e5e5ce0db7958c4e29f405f2fed9b"}
Jan 05 22:14:10 crc kubenswrapper[5034]: I0105 22:14:10.919147 5034 generic.go:334] "Generic (PLEG): container finished" podID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" containerID="41a7d2da4e0c3dbd8ff0015cadc1c9dfc7a7b5f6255e62c869d780dc99f63b99" exitCode=0
Jan 05 22:14:10 crc kubenswrapper[5034]: I0105 22:14:10.919217 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" event={"ID":"8de19bd3-39c9-47d0-bb6c-4bd536d54611","Type":"ContainerDied","Data":"41a7d2da4e0c3dbd8ff0015cadc1c9dfc7a7b5f6255e62c869d780dc99f63b99"}
Jan 05 22:14:10 crc kubenswrapper[5034]: I0105 22:14:10.919249 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" event={"ID":"8de19bd3-39c9-47d0-bb6c-4bd536d54611","Type":"ContainerStarted","Data":"8df1f628499a66cdfb5b815a4ae03ff3fea633b2cb2063fa50a9eff9192a42bf"}
Jan 05 22:14:10 crc kubenswrapper[5034]: I0105 22:14:10.937064 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerStarted","Data":"6dc7cc9893d00a09bfca4e93af22739fce7496bbdb2a3f40335dada1b451dafd"}
Jan 05 22:14:10 crc kubenswrapper[5034]: I0105 22:14:10.951357 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a66807a4-0e6f-4cfb-aab8-7624a2874a4a","Type":"ContainerStarted","Data":"c4a1bf0c417d9f58af2fed432e5ad6048e86b66785fb2dd37ee76f5f0ed57e86"}
Jan 05 22:14:11 crc kubenswrapper[5034]: I0105 22:14:11.988108 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sh4vf" event={"ID":"9b64a7d9-a1a5-4d7e-9012-c770d15f4267","Type":"ContainerStarted","Data":"019162e89465dcdc97538c62463c0686f10dc17589bc7886a23b68530f20cafc"}
Jan 05 22:14:11 crc kubenswrapper[5034]: I0105 22:14:11.997462 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" event={"ID":"8de19bd3-39c9-47d0-bb6c-4bd536d54611","Type":"ContainerStarted","Data":"e091e99980c0aae7954373c565cf09ddcf6970c6038cdd6df72478e3465b3830"}
Jan 05 22:14:11 crc kubenswrapper[5034]: I0105 22:14:11.997717 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:12 crc kubenswrapper[5034]: I0105 22:14:12.029537 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-sh4vf" podStartSLOduration=3.029513347 podStartE2EDuration="3.029513347s" podCreationTimestamp="2026-01-05 22:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:12.022580121 +0000 UTC m=+1344.394579570" watchObservedRunningTime="2026-01-05 22:14:12.029513347 +0000 UTC m=+1344.401512786"
Jan 05 22:14:12 crc kubenswrapper[5034]: I0105 22:14:12.049566 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" podStartSLOduration=4.049508464 podStartE2EDuration="4.049508464s" podCreationTimestamp="2026-01-05 22:14:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:12.044472711 +0000 UTC m=+1344.416472150" watchObservedRunningTime="2026-01-05 22:14:12.049508464 +0000 UTC m=+1344.421507903"
Jan 05 22:14:13 crc kubenswrapper[5034]: I0105 22:14:13.851500 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 22:14:13 crc kubenswrapper[5034]: I0105 22:14:13.872916 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.046130 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a66807a4-0e6f-4cfb-aab8-7624a2874a4a","Type":"ContainerStarted","Data":"4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50"}
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.048444 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20991e9c-6454-4881-9e4a-b314f666f34e","Type":"ContainerStarted","Data":"257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857"}
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.048474 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20991e9c-6454-4881-9e4a-b314f666f34e","Type":"ContainerStarted","Data":"40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f"}
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.050289 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67aa8cdc-bdff-4d44-a99b-4135eda9265a","Type":"ContainerStarted","Data":"315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0"}
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.050318 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67aa8cdc-bdff-4d44-a99b-4135eda9265a","Type":"ContainerStarted","Data":"01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380"}
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.050468 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerName="nova-metadata-log" containerID="cri-o://01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380" gracePeriod=30
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.050530 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerName="nova-metadata-metadata" containerID="cri-o://315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0" gracePeriod=30
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.056379 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"40fdb124-8bf9-40af-ac43-5d6c1de9a948","Type":"ContainerStarted","Data":"32aececbfbae1f5b2913e1b1a33025ebd94532b11413e617c74ff37f88258f15"}
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.056598 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="40fdb124-8bf9-40af-ac43-5d6c1de9a948" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://32aececbfbae1f5b2913e1b1a33025ebd94532b11413e617c74ff37f88258f15" gracePeriod=30
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.064185 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerStarted","Data":"1cd1d27f410ceb0b6bf42fbbb1890ee47957e6d4db04e579e0044c68320a7d35"}
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.065507 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.081947 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.855107073 podStartE2EDuration="8.081922875s" podCreationTimestamp="2026-01-05 22:14:08 +0000 UTC" firstStartedPulling="2026-01-05 22:14:09.917049331 +0000 UTC m=+1342.289048770" lastFinishedPulling="2026-01-05 22:14:15.143865133 +0000 UTC m=+1347.515864572" observedRunningTime="2026-01-05 22:14:16.071191841 +0000 UTC m=+1348.443191280" watchObservedRunningTime="2026-01-05 22:14:16.081922875 +0000 UTC m=+1348.453922314"
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.108005 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.554804696 podStartE2EDuration="9.107975763s" podCreationTimestamp="2026-01-05 22:14:07 +0000 UTC" firstStartedPulling="2026-01-05 22:14:09.592312672 +0000 UTC m=+1341.964312111" lastFinishedPulling="2026-01-05 22:14:15.145483739 +0000 UTC m=+1347.517483178" observedRunningTime="2026-01-05 22:14:16.104735781 +0000 UTC m=+1348.476735220" watchObservedRunningTime="2026-01-05 22:14:16.107975763 +0000 UTC m=+1348.479975272"
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.127483 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.346136835 podStartE2EDuration="8.127450934s" podCreationTimestamp="2026-01-05 22:14:08 +0000 UTC" firstStartedPulling="2026-01-05 22:14:09.342365262 +0000 UTC m=+1341.714364701" lastFinishedPulling="2026-01-05 22:14:15.123679361 +0000 UTC m=+1347.495678800" observedRunningTime="2026-01-05 22:14:16.120639871 +0000 UTC m=+1348.492639300" watchObservedRunningTime="2026-01-05 22:14:16.127450934 +0000 UTC m=+1348.499450373"
Jan 05 22:14:16 crc kubenswrapper[5034]: I0105 22:14:16.151001 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.633112747 podStartE2EDuration="11.150980811s" podCreationTimestamp="2026-01-05 22:14:05 +0000 UTC" firstStartedPulling="2026-01-05 22:14:06.634417898 +0000 UTC m=+1339.006417337" lastFinishedPulling="2026-01-05 22:14:15.152285972 +0000 UTC m=+1347.524285401" observedRunningTime="2026-01-05 22:14:16.145133355 +0000 UTC m=+1348.517132794" watchObservedRunningTime="2026-01-05 22:14:16.150980811 +0000 UTC m=+1348.522980250"
Jan 05 22:14:17 crc kubenswrapper[5034]: I0105 22:14:17.077939 5034 generic.go:334] "Generic (PLEG): container finished" podID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerID="01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380" exitCode=143
Jan 05 22:14:17 crc kubenswrapper[5034]: I0105 22:14:17.078050 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67aa8cdc-bdff-4d44-a99b-4135eda9265a","Type":"ContainerDied","Data":"01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380"}
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.397738 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.554584 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.554669 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.754252 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.754304 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.793967 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.794938 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.795002 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 05 22:14:18 crc kubenswrapper[5034]: I0105 22:14:18.830399 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.369158484 podStartE2EDuration="10.830366816s" podCreationTimestamp="2026-01-05 22:14:08 +0000 UTC" firstStartedPulling="2026-01-05 22:14:09.684277377 +0000 UTC m=+1342.056276816" lastFinishedPulling="2026-01-05 22:14:15.145485709 +0000 UTC m=+1347.517485148" observedRunningTime="2026-01-05 22:14:16.195105541 +0000 UTC m=+1348.567104990" watchObservedRunningTime="2026-01-05 22:14:18.830366816 +0000 UTC m=+1351.202366255"
Jan 05 22:14:19 crc kubenswrapper[5034]: I0105 22:14:19.132574 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 05 22:14:19 crc kubenswrapper[5034]: I0105 22:14:19.232315 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh"
Jan 05 22:14:19 crc kubenswrapper[5034]: I0105 22:14:19.345975 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-c5krh"]
Jan 05 22:14:19 crc kubenswrapper[5034]: I0105 22:14:19.346314 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" podUID="1c240b88-16fc-469e-b12f-64f70d8cde97" containerName="dnsmasq-dns" containerID="cri-o://1bed29012010123310b4525d0409cb3ef6a3c05ba563c641a31b318cc916ccd9" gracePeriod=10
Jan 05 22:14:19 crc kubenswrapper[5034]: I0105 22:14:19.638336 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 22:14:19 crc kubenswrapper[5034]: I0105 22:14:19.638486 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.123320 5034 generic.go:334] "Generic (PLEG): container finished" podID="fea79208-89f2-486d-830a-d7ab3bab3342" containerID="02eccd3dc4c2053f38511d134bd4c631260cba004f866c32569a8710aa27ac3d" exitCode=0
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.123429 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ptccl" event={"ID":"fea79208-89f2-486d-830a-d7ab3bab3342","Type":"ContainerDied","Data":"02eccd3dc4c2053f38511d134bd4c631260cba004f866c32569a8710aa27ac3d"}
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.126816 5034 generic.go:334] "Generic (PLEG): container finished" podID="1c240b88-16fc-469e-b12f-64f70d8cde97" containerID="1bed29012010123310b4525d0409cb3ef6a3c05ba563c641a31b318cc916ccd9" exitCode=0
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.126861 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" event={"ID":"1c240b88-16fc-469e-b12f-64f70d8cde97","Type":"ContainerDied","Data":"1bed29012010123310b4525d0409cb3ef6a3c05ba563c641a31b318cc916ccd9"}
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.368528 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh"
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.486186 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-swift-storage-0\") pod \"1c240b88-16fc-469e-b12f-64f70d8cde97\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") "
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.486654 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-sb\") pod \"1c240b88-16fc-469e-b12f-64f70d8cde97\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") "
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.486979 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-nb\") pod \"1c240b88-16fc-469e-b12f-64f70d8cde97\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") "
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.487174 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-config\") pod \"1c240b88-16fc-469e-b12f-64f70d8cde97\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") "
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.487365 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcbhf\" (UniqueName: \"kubernetes.io/projected/1c240b88-16fc-469e-b12f-64f70d8cde97-kube-api-access-kcbhf\") pod \"1c240b88-16fc-469e-b12f-64f70d8cde97\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") "
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.487581 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-svc\") pod \"1c240b88-16fc-469e-b12f-64f70d8cde97\" (UID: \"1c240b88-16fc-469e-b12f-64f70d8cde97\") "
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.511061 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c240b88-16fc-469e-b12f-64f70d8cde97-kube-api-access-kcbhf" (OuterVolumeSpecName: "kube-api-access-kcbhf") pod "1c240b88-16fc-469e-b12f-64f70d8cde97" (UID: "1c240b88-16fc-469e-b12f-64f70d8cde97"). InnerVolumeSpecName "kube-api-access-kcbhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.554935 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c240b88-16fc-469e-b12f-64f70d8cde97" (UID: "1c240b88-16fc-469e-b12f-64f70d8cde97"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.567219 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c240b88-16fc-469e-b12f-64f70d8cde97" (UID: "1c240b88-16fc-469e-b12f-64f70d8cde97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.574595 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-config" (OuterVolumeSpecName: "config") pod "1c240b88-16fc-469e-b12f-64f70d8cde97" (UID: "1c240b88-16fc-469e-b12f-64f70d8cde97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.579121 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c240b88-16fc-469e-b12f-64f70d8cde97" (UID: "1c240b88-16fc-469e-b12f-64f70d8cde97"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.590537 5034 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.590581 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.590597 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-config\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.590609 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcbhf\" (UniqueName: \"kubernetes.io/projected/1c240b88-16fc-469e-b12f-64f70d8cde97-kube-api-access-kcbhf\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.590624 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.591347 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c240b88-16fc-469e-b12f-64f70d8cde97" (UID: "1c240b88-16fc-469e-b12f-64f70d8cde97"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 22:14:20 crc kubenswrapper[5034]: I0105 22:14:20.693488 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c240b88-16fc-469e-b12f-64f70d8cde97-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.140989 5034 generic.go:334] "Generic (PLEG): container finished" podID="9b64a7d9-a1a5-4d7e-9012-c770d15f4267" containerID="019162e89465dcdc97538c62463c0686f10dc17589bc7886a23b68530f20cafc" exitCode=0
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.141068 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sh4vf" event={"ID":"9b64a7d9-a1a5-4d7e-9012-c770d15f4267","Type":"ContainerDied","Data":"019162e89465dcdc97538c62463c0686f10dc17589bc7886a23b68530f20cafc"}
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.144466 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh"
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.145575 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-c5krh" event={"ID":"1c240b88-16fc-469e-b12f-64f70d8cde97","Type":"ContainerDied","Data":"0dfc55b3e22055a1bc7a302296a718c77497ea8f35f289bc3903ea7cef7a0f41"}
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.145662 5034 scope.go:117] "RemoveContainer" containerID="1bed29012010123310b4525d0409cb3ef6a3c05ba563c641a31b318cc916ccd9"
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.212645 5034 scope.go:117] "RemoveContainer" containerID="02b44ac4b589fbdb6d5faa7d0afe501cc59257a7b136459596e6f7d7f14cc1f2"
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.248778 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-c5krh"]
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.271722 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-c5krh"]
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.674221 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ptccl"
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.824532 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-config-data\") pod \"fea79208-89f2-486d-830a-d7ab3bab3342\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") "
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.824587 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-combined-ca-bundle\") pod \"fea79208-89f2-486d-830a-d7ab3bab3342\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") "
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.824680 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-scripts\") pod \"fea79208-89f2-486d-830a-d7ab3bab3342\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") "
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.824727 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvl8p\" (UniqueName: \"kubernetes.io/projected/fea79208-89f2-486d-830a-d7ab3bab3342-kube-api-access-hvl8p\") pod \"fea79208-89f2-486d-830a-d7ab3bab3342\" (UID: \"fea79208-89f2-486d-830a-d7ab3bab3342\") "
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.831475 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea79208-89f2-486d-830a-d7ab3bab3342-kube-api-access-hvl8p" (OuterVolumeSpecName: "kube-api-access-hvl8p") pod "fea79208-89f2-486d-830a-d7ab3bab3342" (UID: "fea79208-89f2-486d-830a-d7ab3bab3342"). InnerVolumeSpecName "kube-api-access-hvl8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.832243 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-scripts" (OuterVolumeSpecName: "scripts") pod "fea79208-89f2-486d-830a-d7ab3bab3342" (UID: "fea79208-89f2-486d-830a-d7ab3bab3342"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.853569 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c240b88-16fc-469e-b12f-64f70d8cde97" path="/var/lib/kubelet/pods/1c240b88-16fc-469e-b12f-64f70d8cde97/volumes"
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.858066 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fea79208-89f2-486d-830a-d7ab3bab3342" (UID: "fea79208-89f2-486d-830a-d7ab3bab3342"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.867809 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-config-data" (OuterVolumeSpecName: "config-data") pod "fea79208-89f2-486d-830a-d7ab3bab3342" (UID: "fea79208-89f2-486d-830a-d7ab3bab3342"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.927150 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.927190 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.927206 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea79208-89f2-486d-830a-d7ab3bab3342-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:21 crc kubenswrapper[5034]: I0105 22:14:21.927221 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvl8p\" (UniqueName: \"kubernetes.io/projected/fea79208-89f2-486d-830a-d7ab3bab3342-kube-api-access-hvl8p\") on node \"crc\" DevicePath \"\""
Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.158934 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ptccl" event={"ID":"fea79208-89f2-486d-830a-d7ab3bab3342","Type":"ContainerDied","Data":"050b7a19e2cb743bc3e3f9e79d0fc19b046bc5020d3e273810811f3e07415d70"}
Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.158988 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="050b7a19e2cb743bc3e3f9e79d0fc19b046bc5020d3e273810811f3e07415d70"
Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.158950 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ptccl" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.267741 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.268550 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-log" containerID="cri-o://40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f" gracePeriod=30 Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.268646 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-api" containerID="cri-o://257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857" gracePeriod=30 Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.323575 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.323783 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a66807a4-0e6f-4cfb-aab8-7624a2874a4a" containerName="nova-scheduler-scheduler" containerID="cri-o://4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50" gracePeriod=30 Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.575776 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sh4vf" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.742224 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-combined-ca-bundle\") pod \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.742446 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-scripts\") pod \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.742489 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnlbh\" (UniqueName: \"kubernetes.io/projected/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-kube-api-access-gnlbh\") pod \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.742582 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-config-data\") pod \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\" (UID: \"9b64a7d9-a1a5-4d7e-9012-c770d15f4267\") " Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.748761 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-kube-api-access-gnlbh" (OuterVolumeSpecName: "kube-api-access-gnlbh") pod "9b64a7d9-a1a5-4d7e-9012-c770d15f4267" (UID: "9b64a7d9-a1a5-4d7e-9012-c770d15f4267"). InnerVolumeSpecName "kube-api-access-gnlbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.750682 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-scripts" (OuterVolumeSpecName: "scripts") pod "9b64a7d9-a1a5-4d7e-9012-c770d15f4267" (UID: "9b64a7d9-a1a5-4d7e-9012-c770d15f4267"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.773925 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b64a7d9-a1a5-4d7e-9012-c770d15f4267" (UID: "9b64a7d9-a1a5-4d7e-9012-c770d15f4267"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.774465 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-config-data" (OuterVolumeSpecName: "config-data") pod "9b64a7d9-a1a5-4d7e-9012-c770d15f4267" (UID: "9b64a7d9-a1a5-4d7e-9012-c770d15f4267"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.844518 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.844556 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnlbh\" (UniqueName: \"kubernetes.io/projected/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-kube-api-access-gnlbh\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.844573 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:22 crc kubenswrapper[5034]: I0105 22:14:22.844585 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b64a7d9-a1a5-4d7e-9012-c770d15f4267-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.169814 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sh4vf" event={"ID":"9b64a7d9-a1a5-4d7e-9012-c770d15f4267","Type":"ContainerDied","Data":"a6d324dd629efd7f019950e0592d1d0f6d8e5e5ce0db7958c4e29f405f2fed9b"} Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.170134 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6d324dd629efd7f019950e0592d1d0f6d8e5e5ce0db7958c4e29f405f2fed9b" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.170206 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sh4vf" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.176347 5034 generic.go:334] "Generic (PLEG): container finished" podID="20991e9c-6454-4881-9e4a-b314f666f34e" containerID="40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f" exitCode=143 Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.176395 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20991e9c-6454-4881-9e4a-b314f666f34e","Type":"ContainerDied","Data":"40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f"} Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.256168 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 22:14:23 crc kubenswrapper[5034]: E0105 22:14:23.256561 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c240b88-16fc-469e-b12f-64f70d8cde97" containerName="init" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.256579 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c240b88-16fc-469e-b12f-64f70d8cde97" containerName="init" Jan 05 22:14:23 crc kubenswrapper[5034]: E0105 22:14:23.256599 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b64a7d9-a1a5-4d7e-9012-c770d15f4267" containerName="nova-cell1-conductor-db-sync" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.256606 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b64a7d9-a1a5-4d7e-9012-c770d15f4267" containerName="nova-cell1-conductor-db-sync" Jan 05 22:14:23 crc kubenswrapper[5034]: E0105 22:14:23.256637 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea79208-89f2-486d-830a-d7ab3bab3342" containerName="nova-manage" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.256644 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea79208-89f2-486d-830a-d7ab3bab3342" containerName="nova-manage" Jan 05 22:14:23 crc kubenswrapper[5034]: E0105 22:14:23.256662 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c240b88-16fc-469e-b12f-64f70d8cde97" containerName="dnsmasq-dns" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.256668 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c240b88-16fc-469e-b12f-64f70d8cde97" containerName="dnsmasq-dns" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.256832 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b64a7d9-a1a5-4d7e-9012-c770d15f4267" containerName="nova-cell1-conductor-db-sync" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.256846 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea79208-89f2-486d-830a-d7ab3bab3342" containerName="nova-manage" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.256863 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c240b88-16fc-469e-b12f-64f70d8cde97" containerName="dnsmasq-dns" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.257645 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.263124 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.275093 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.354812 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.354882 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.355012 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8t2p\" (UniqueName: \"kubernetes.io/projected/a4a7982e-25f8-4f97-9db5-1c828835ae84-kube-api-access-c8t2p\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.459201 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.459314 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.459505 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8t2p\" (UniqueName: \"kubernetes.io/projected/a4a7982e-25f8-4f97-9db5-1c828835ae84-kube-api-access-c8t2p\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.465349 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.465900 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.489926 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8t2p\" (UniqueName: \"kubernetes.io/projected/a4a7982e-25f8-4f97-9db5-1c828835ae84-kube-api-access-c8t2p\") pod \"nova-cell1-conductor-0\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: I0105 22:14:23.575764 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:23 crc kubenswrapper[5034]: E0105 22:14:23.754157 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50 is running failed: container process not found" containerID="4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 22:14:23 crc kubenswrapper[5034]: E0105 22:14:23.755072 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50 is running failed: container process not found" containerID="4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 22:14:23 crc kubenswrapper[5034]: E0105 22:14:23.755515 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50 is running failed: container process not found" containerID="4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 22:14:23 crc kubenswrapper[5034]: E0105 22:14:23.755552 5034 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a66807a4-0e6f-4cfb-aab8-7624a2874a4a" containerName="nova-scheduler-scheduler" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.077734 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.192297 5034 util.go:48] "No ready sandbox for pod can be found. 
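[Editor's note] The four ExecSync failures above come from the readiness probe of nova-scheduler-scheduler, which runs /usr/bin/pgrep -r DRST nova-scheduler inside the container; once the container has exited during its graceful stop, the CRI call can only return NotFound, and prober.go records "Probe errored". A hedged sketch of how such an exec probe is expressed with the Kubernetes Go types (assuming a recent k8s.io/api/core/v1 where the handler field is the embedded ProbeHandler; the period and timeout values are illustrative, not taken from the real manifest):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// A probe like this can only succeed while the container's process
    	// tree exists, so NotFound from ExecSync during termination is
    	// expected noise rather than a scheduler fault.
    	probe := corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{
    				Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"},
    			},
    		},
    		PeriodSeconds:  10, // assumed values
    		TimeoutSeconds: 5,
    	}
    	fmt.Printf("%+v\n", probe)
    }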
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.192615 5034 generic.go:334] "Generic (PLEG): container finished" podID="a66807a4-0e6f-4cfb-aab8-7624a2874a4a" containerID="4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50" exitCode=0 Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.192688 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a66807a4-0e6f-4cfb-aab8-7624a2874a4a","Type":"ContainerDied","Data":"4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50"} Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.192716 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a66807a4-0e6f-4cfb-aab8-7624a2874a4a","Type":"ContainerDied","Data":"c4a1bf0c417d9f58af2fed432e5ad6048e86b66785fb2dd37ee76f5f0ed57e86"} Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.192737 5034 scope.go:117] "RemoveContainer" containerID="4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.195670 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a4a7982e-25f8-4f97-9db5-1c828835ae84","Type":"ContainerStarted","Data":"fdb5efb4b03dad314ade9b05102ed3489f7acd9f9959529f443cffed875fc576"} Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.226111 5034 scope.go:117] "RemoveContainer" containerID="4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50" Jan 05 22:14:24 crc kubenswrapper[5034]: E0105 22:14:24.226515 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50\": container with ID starting with 4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50 not found: ID does not exist" containerID="4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.226555 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50"} err="failed to get container status \"4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50\": rpc error: code = NotFound desc = could not find container \"4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50\": container with ID starting with 4e17b67c3d35fc49f205964b25e389f1102732f7e6d8b30e4bb10ce9a3fd4a50 not found: ID does not exist" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.286153 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-combined-ca-bundle\") pod \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.286626 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-config-data\") pod \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.286847 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cl8v\" (UniqueName: 
\"kubernetes.io/projected/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-kube-api-access-6cl8v\") pod \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\" (UID: \"a66807a4-0e6f-4cfb-aab8-7624a2874a4a\") " Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.291056 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-kube-api-access-6cl8v" (OuterVolumeSpecName: "kube-api-access-6cl8v") pod "a66807a4-0e6f-4cfb-aab8-7624a2874a4a" (UID: "a66807a4-0e6f-4cfb-aab8-7624a2874a4a"). InnerVolumeSpecName "kube-api-access-6cl8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.312055 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-config-data" (OuterVolumeSpecName: "config-data") pod "a66807a4-0e6f-4cfb-aab8-7624a2874a4a" (UID: "a66807a4-0e6f-4cfb-aab8-7624a2874a4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.315955 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a66807a4-0e6f-4cfb-aab8-7624a2874a4a" (UID: "a66807a4-0e6f-4cfb-aab8-7624a2874a4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.389741 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.389782 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:24 crc kubenswrapper[5034]: I0105 22:14:24.389794 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cl8v\" (UniqueName: \"kubernetes.io/projected/a66807a4-0e6f-4cfb-aab8-7624a2874a4a-kube-api-access-6cl8v\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.210233 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a4a7982e-25f8-4f97-9db5-1c828835ae84","Type":"ContainerStarted","Data":"bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a"} Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.210802 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.211612 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.238578 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.238557202 podStartE2EDuration="2.238557202s" podCreationTimestamp="2026-01-05 22:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:25.228147737 +0000 UTC m=+1357.600147196" watchObservedRunningTime="2026-01-05 22:14:25.238557202 +0000 UTC m=+1357.610556641" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.266740 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.282226 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.294821 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:14:25 crc kubenswrapper[5034]: E0105 22:14:25.295549 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66807a4-0e6f-4cfb-aab8-7624a2874a4a" containerName="nova-scheduler-scheduler" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.295581 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66807a4-0e6f-4cfb-aab8-7624a2874a4a" containerName="nova-scheduler-scheduler" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.295818 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66807a4-0e6f-4cfb-aab8-7624a2874a4a" containerName="nova-scheduler-scheduler" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.296822 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.298779 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.313980 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.413729 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xxc\" (UniqueName: \"kubernetes.io/projected/c3cc7b05-b609-4653-bf64-051aa3e11519-kube-api-access-q6xxc\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.413843 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.413970 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-config-data\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.515960 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6xxc\" (UniqueName: \"kubernetes.io/projected/c3cc7b05-b609-4653-bf64-051aa3e11519-kube-api-access-q6xxc\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.516095 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.516164 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-config-data\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.524269 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-config-data\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.528692 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.539286 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xxc\" (UniqueName: 
\"kubernetes.io/projected/c3cc7b05-b609-4653-bf64-051aa3e11519-kube-api-access-q6xxc\") pod \"nova-scheduler-0\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.618281 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.854368 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66807a4-0e6f-4cfb-aab8-7624a2874a4a" path="/var/lib/kubelet/pods/a66807a4-0e6f-4cfb-aab8-7624a2874a4a/volumes" Jan 05 22:14:25 crc kubenswrapper[5034]: I0105 22:14:25.945255 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.027345 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb94w\" (UniqueName: \"kubernetes.io/projected/20991e9c-6454-4881-9e4a-b314f666f34e-kube-api-access-pb94w\") pod \"20991e9c-6454-4881-9e4a-b314f666f34e\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.027871 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20991e9c-6454-4881-9e4a-b314f666f34e-logs\") pod \"20991e9c-6454-4881-9e4a-b314f666f34e\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.027924 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-config-data\") pod \"20991e9c-6454-4881-9e4a-b314f666f34e\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.028041 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-combined-ca-bundle\") pod \"20991e9c-6454-4881-9e4a-b314f666f34e\" (UID: \"20991e9c-6454-4881-9e4a-b314f666f34e\") " Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.032981 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20991e9c-6454-4881-9e4a-b314f666f34e-kube-api-access-pb94w" (OuterVolumeSpecName: "kube-api-access-pb94w") pod "20991e9c-6454-4881-9e4a-b314f666f34e" (UID: "20991e9c-6454-4881-9e4a-b314f666f34e"). InnerVolumeSpecName "kube-api-access-pb94w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.044712 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20991e9c-6454-4881-9e4a-b314f666f34e-logs" (OuterVolumeSpecName: "logs") pod "20991e9c-6454-4881-9e4a-b314f666f34e" (UID: "20991e9c-6454-4881-9e4a-b314f666f34e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.056331 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20991e9c-6454-4881-9e4a-b314f666f34e" (UID: "20991e9c-6454-4881-9e4a-b314f666f34e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.064096 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-config-data" (OuterVolumeSpecName: "config-data") pod "20991e9c-6454-4881-9e4a-b314f666f34e" (UID: "20991e9c-6454-4881-9e4a-b314f666f34e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.130500 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb94w\" (UniqueName: \"kubernetes.io/projected/20991e9c-6454-4881-9e4a-b314f666f34e-kube-api-access-pb94w\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.130551 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20991e9c-6454-4881-9e4a-b314f666f34e-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.130565 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.130633 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20991e9c-6454-4881-9e4a-b314f666f34e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.173735 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.222955 5034 generic.go:334] "Generic (PLEG): container finished" podID="20991e9c-6454-4881-9e4a-b314f666f34e" containerID="257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857" exitCode=0 Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.223094 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20991e9c-6454-4881-9e4a-b314f666f34e","Type":"ContainerDied","Data":"257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857"} Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.223179 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20991e9c-6454-4881-9e4a-b314f666f34e","Type":"ContainerDied","Data":"399f14d52c23c6bf0a1061b3b4d5c6eda31134681bc2ae69ff42c0b3b505d821"} Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.223215 5034 scope.go:117] "RemoveContainer" containerID="257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.223437 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.229648 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c3cc7b05-b609-4653-bf64-051aa3e11519","Type":"ContainerStarted","Data":"1aaf51d807934681fd5c94f6b8fbefc95c096cf0cad6594539eb259893337fbf"} Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.250004 5034 scope.go:117] "RemoveContainer" containerID="40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.287826 5034 scope.go:117] "RemoveContainer" containerID="257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857" Jan 05 22:14:26 crc kubenswrapper[5034]: E0105 22:14:26.288384 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857\": container with ID starting with 257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857 not found: ID does not exist" containerID="257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.288434 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857"} err="failed to get container status \"257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857\": rpc error: code = NotFound desc = could not find container \"257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857\": container with ID starting with 257de2fde6d5debd2c8bb23f12d605a2ff8d60888bbecf5538d289c67db6b857 not found: ID does not exist" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.288459 5034 scope.go:117] "RemoveContainer" containerID="40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f" Jan 05 22:14:26 crc kubenswrapper[5034]: E0105 22:14:26.288935 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f\": container with ID starting with 40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f not found: ID does not exist" containerID="40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.288985 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f"} err="failed to get container status \"40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f\": rpc error: code = NotFound desc = could not find container \"40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f\": container with ID starting with 40f2cddc96ffeaaf7a0a713a2a4a73d67fd6e2bf3549835783764b364ee3284f not found: ID does not exist" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.290701 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.310753 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.325544 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:26 crc kubenswrapper[5034]: E0105 22:14:26.326190 5034 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-log" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.326212 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-log" Jan 05 22:14:26 crc kubenswrapper[5034]: E0105 22:14:26.326275 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-api" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.326289 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-api" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.326501 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-api" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.326526 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" containerName="nova-api-log" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.327584 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.330340 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.343681 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.436554 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-config-data\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.436603 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-logs\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.436629 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.436741 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htvtw\" (UniqueName: \"kubernetes.io/projected/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-kube-api-access-htvtw\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.538552 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-logs\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.538887 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.538997 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htvtw\" (UniqueName: \"kubernetes.io/projected/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-kube-api-access-htvtw\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.539061 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-config-data\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.539962 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-logs\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.547864 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.548063 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-config-data\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.556660 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htvtw\" (UniqueName: \"kubernetes.io/projected/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-kube-api-access-htvtw\") pod \"nova-api-0\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " pod="openstack/nova-api-0" Jan 05 22:14:26 crc kubenswrapper[5034]: I0105 22:14:26.647569 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:14:27 crc kubenswrapper[5034]: I0105 22:14:27.141485 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:27 crc kubenswrapper[5034]: W0105 22:14:27.156978 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc157dd0a_3ebf_4bb2_af9f_189d1a47af02.slice/crio-f61381f8218e2d740364289e7f8cb40606ace41b192c06cb8a74c06fb1271c4f WatchSource:0}: Error finding container f61381f8218e2d740364289e7f8cb40606ace41b192c06cb8a74c06fb1271c4f: Status 404 returned error can't find the container with id f61381f8218e2d740364289e7f8cb40606ace41b192c06cb8a74c06fb1271c4f Jan 05 22:14:27 crc kubenswrapper[5034]: I0105 22:14:27.241906 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c3cc7b05-b609-4653-bf64-051aa3e11519","Type":"ContainerStarted","Data":"4203de1c272e07322c3cb2fb23ab8e191fcd3c0e7992de217064a0186d50eb84"} Jan 05 22:14:27 crc kubenswrapper[5034]: I0105 22:14:27.244559 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c157dd0a-3ebf-4bb2-af9f-189d1a47af02","Type":"ContainerStarted","Data":"f61381f8218e2d740364289e7f8cb40606ace41b192c06cb8a74c06fb1271c4f"} Jan 05 22:14:27 crc kubenswrapper[5034]: I0105 22:14:27.257418 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.257402116 podStartE2EDuration="2.257402116s" podCreationTimestamp="2026-01-05 22:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:27.256554882 +0000 UTC m=+1359.628554321" watchObservedRunningTime="2026-01-05 22:14:27.257402116 +0000 UTC m=+1359.629401555" Jan 05 22:14:27 crc kubenswrapper[5034]: I0105 22:14:27.852066 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20991e9c-6454-4881-9e4a-b314f666f34e" path="/var/lib/kubelet/pods/20991e9c-6454-4881-9e4a-b314f666f34e/volumes" Jan 05 22:14:28 crc kubenswrapper[5034]: I0105 22:14:28.259187 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c157dd0a-3ebf-4bb2-af9f-189d1a47af02","Type":"ContainerStarted","Data":"81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1"} Jan 05 22:14:28 crc kubenswrapper[5034]: I0105 22:14:28.259298 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c157dd0a-3ebf-4bb2-af9f-189d1a47af02","Type":"ContainerStarted","Data":"9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c"} Jan 05 22:14:28 crc kubenswrapper[5034]: I0105 22:14:28.288418 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.28839495 podStartE2EDuration="2.28839495s" podCreationTimestamp="2026-01-05 22:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:28.27957213 +0000 UTC m=+1360.651571579" watchObservedRunningTime="2026-01-05 22:14:28.28839495 +0000 UTC m=+1360.660394389" Jan 05 22:14:30 crc kubenswrapper[5034]: I0105 22:14:30.619292 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 22:14:33 crc kubenswrapper[5034]: I0105 22:14:33.616961 5034 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 05 22:14:35 crc kubenswrapper[5034]: I0105 22:14:35.619509 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 05 22:14:35 crc kubenswrapper[5034]: I0105 22:14:35.668422 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 05 22:14:36 crc kubenswrapper[5034]: I0105 22:14:36.121656 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 05 22:14:36 crc kubenswrapper[5034]: I0105 22:14:36.398621 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 22:14:36 crc kubenswrapper[5034]: I0105 22:14:36.647918 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 22:14:36 crc kubenswrapper[5034]: I0105 22:14:36.647979 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 22:14:37 crc kubenswrapper[5034]: I0105 22:14:37.731408 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 22:14:37 crc kubenswrapper[5034]: I0105 22:14:37.731446 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 22:14:40 crc kubenswrapper[5034]: I0105 22:14:40.227057 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:14:40 crc kubenswrapper[5034]: I0105 22:14:40.227859 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5dc94453-3c0f-4b4c-a23e-f2c88e41325c" containerName="kube-state-metrics" containerID="cri-o://85dfecff8a53f1e91c768d3fcda1317cf930084d81b47ee5eed865627d3cd7c7" gracePeriod=30 Jan 05 22:14:40 crc kubenswrapper[5034]: I0105 22:14:40.401548 5034 generic.go:334] "Generic (PLEG): container finished" podID="5dc94453-3c0f-4b4c-a23e-f2c88e41325c" containerID="85dfecff8a53f1e91c768d3fcda1317cf930084d81b47ee5eed865627d3cd7c7" exitCode=2 Jan 05 22:14:40 crc kubenswrapper[5034]: I0105 22:14:40.401639 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5dc94453-3c0f-4b4c-a23e-f2c88e41325c","Type":"ContainerDied","Data":"85dfecff8a53f1e91c768d3fcda1317cf930084d81b47ee5eed865627d3cd7c7"} Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.280959 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.413183 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5dc94453-3c0f-4b4c-a23e-f2c88e41325c","Type":"ContainerDied","Data":"2355cd83acf22f29b5bfd6e6328495f2124a5aba3ef0e1054f2463e56e07e20b"} Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.413244 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.413353 5034 scope.go:117] "RemoveContainer" containerID="85dfecff8a53f1e91c768d3fcda1317cf930084d81b47ee5eed865627d3cd7c7" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.450739 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrzdg\" (UniqueName: \"kubernetes.io/projected/5dc94453-3c0f-4b4c-a23e-f2c88e41325c-kube-api-access-lrzdg\") pod \"5dc94453-3c0f-4b4c-a23e-f2c88e41325c\" (UID: \"5dc94453-3c0f-4b4c-a23e-f2c88e41325c\") " Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.457426 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc94453-3c0f-4b4c-a23e-f2c88e41325c-kube-api-access-lrzdg" (OuterVolumeSpecName: "kube-api-access-lrzdg") pod "5dc94453-3c0f-4b4c-a23e-f2c88e41325c" (UID: "5dc94453-3c0f-4b4c-a23e-f2c88e41325c"). InnerVolumeSpecName "kube-api-access-lrzdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.555047 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrzdg\" (UniqueName: \"kubernetes.io/projected/5dc94453-3c0f-4b4c-a23e-f2c88e41325c-kube-api-access-lrzdg\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.751426 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.766242 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.779825 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:14:41 crc kubenswrapper[5034]: E0105 22:14:41.780425 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc94453-3c0f-4b4c-a23e-f2c88e41325c" containerName="kube-state-metrics" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.780453 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc94453-3c0f-4b4c-a23e-f2c88e41325c" containerName="kube-state-metrics" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.780644 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc94453-3c0f-4b4c-a23e-f2c88e41325c" containerName="kube-state-metrics" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.781499 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.789768 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.799658 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.807643 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.851919 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc94453-3c0f-4b4c-a23e-f2c88e41325c" path="/var/lib/kubelet/pods/5dc94453-3c0f-4b4c-a23e-f2c88e41325c/volumes" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.966682 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.966769 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.966797 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:41 crc kubenswrapper[5034]: I0105 22:14:41.967337 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xzhh\" (UniqueName: \"kubernetes.io/projected/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-api-access-2xzhh\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.069707 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.069823 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.069857 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.069953 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xzhh\" (UniqueName: \"kubernetes.io/projected/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-api-access-2xzhh\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.087670 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.088271 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="ceilometer-central-agent" containerID="cri-o://f875cd7858cfa25812f7e00c47f3309348048d27de4fe4dbb9d4560d0d332425" gracePeriod=30 Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.089441 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="proxy-httpd" containerID="cri-o://1cd1d27f410ceb0b6bf42fbbb1890ee47957e6d4db04e579e0044c68320a7d35" gracePeriod=30 Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.092606 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.089657 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="sg-core" containerID="cri-o://6dc7cc9893d00a09bfca4e93af22739fce7496bbdb2a3f40335dada1b451dafd" gracePeriod=30 Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.089613 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="ceilometer-notification-agent" containerID="cri-o://5b91701d06e28e42ec9fc0c9972bb0d07b952d7c7701630f2489b5a58b251522" gracePeriod=30 Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.093550 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.096181 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.105474 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xzhh\" (UniqueName: \"kubernetes.io/projected/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-api-access-2xzhh\") pod \"kube-state-metrics-0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " pod="openstack/kube-state-metrics-0" Jan 05 
Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.425136 5034 generic.go:334] "Generic (PLEG): container finished" podID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerID="1cd1d27f410ceb0b6bf42fbbb1890ee47957e6d4db04e579e0044c68320a7d35" exitCode=0
Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.425521 5034 generic.go:334] "Generic (PLEG): container finished" podID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerID="6dc7cc9893d00a09bfca4e93af22739fce7496bbdb2a3f40335dada1b451dafd" exitCode=2
Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.425273 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerDied","Data":"1cd1d27f410ceb0b6bf42fbbb1890ee47957e6d4db04e579e0044c68320a7d35"}
Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.425626 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerDied","Data":"6dc7cc9893d00a09bfca4e93af22739fce7496bbdb2a3f40335dada1b451dafd"}
Jan 05 22:14:42 crc kubenswrapper[5034]: I0105 22:14:42.622445 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.453706 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c91a26e-489c-40b8-bf4b-b60f65431df0","Type":"ContainerStarted","Data":"993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8"}
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.454188 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c91a26e-489c-40b8-bf4b-b60f65431df0","Type":"ContainerStarted","Data":"fa8f608c10f02c3a911a3f3f9c03f1342a44666905c20c802565250bd054c327"}
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.454209 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.456850 5034 generic.go:334] "Generic (PLEG): container finished" podID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerID="5b91701d06e28e42ec9fc0c9972bb0d07b952d7c7701630f2489b5a58b251522" exitCode=0
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.456868 5034 generic.go:334] "Generic (PLEG): container finished" podID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerID="f875cd7858cfa25812f7e00c47f3309348048d27de4fe4dbb9d4560d0d332425" exitCode=0
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.456885 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerDied","Data":"5b91701d06e28e42ec9fc0c9972bb0d07b952d7c7701630f2489b5a58b251522"}
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.456900 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerDied","Data":"f875cd7858cfa25812f7e00c47f3309348048d27de4fe4dbb9d4560d0d332425"}
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.475332 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.075243717 podStartE2EDuration="2.475315184s" podCreationTimestamp="2026-01-05 22:14:41 +0000 UTC" firstStartedPulling="2026-01-05 22:14:42.640873108 +0000 UTC m=+1375.012872547" lastFinishedPulling="2026-01-05 22:14:43.040944575 +0000 UTC m=+1375.412944014" observedRunningTime="2026-01-05 22:14:43.471021272 +0000 UTC m=+1375.843020721" watchObservedRunningTime="2026-01-05 22:14:43.475315184 +0000 UTC m=+1375.847314623"
Jan 05 22:14:43 crc kubenswrapper[5034]: I0105 22:14:43.867160 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.020683 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-combined-ca-bundle\") pod \"58cd44c4-477b-449c-820d-33de1ef0dba1\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") "
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.020765 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-log-httpd\") pod \"58cd44c4-477b-449c-820d-33de1ef0dba1\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") "
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.021815 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "58cd44c4-477b-449c-820d-33de1ef0dba1" (UID: "58cd44c4-477b-449c-820d-33de1ef0dba1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.021942 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f2lr\" (UniqueName: \"kubernetes.io/projected/58cd44c4-477b-449c-820d-33de1ef0dba1-kube-api-access-4f2lr\") pod \"58cd44c4-477b-449c-820d-33de1ef0dba1\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") "
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.021975 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-scripts\") pod \"58cd44c4-477b-449c-820d-33de1ef0dba1\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") "
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.022509 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-run-httpd\") pod \"58cd44c4-477b-449c-820d-33de1ef0dba1\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") "
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.022602 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-config-data\") pod \"58cd44c4-477b-449c-820d-33de1ef0dba1\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") "
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.022789 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-sg-core-conf-yaml\") pod \"58cd44c4-477b-449c-820d-33de1ef0dba1\" (UID: \"58cd44c4-477b-449c-820d-33de1ef0dba1\") "
Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.023603 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "58cd44c4-477b-449c-820d-33de1ef0dba1" (UID: "58cd44c4-477b-449c-820d-33de1ef0dba1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "58cd44c4-477b-449c-820d-33de1ef0dba1" (UID: "58cd44c4-477b-449c-820d-33de1ef0dba1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.023658 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.028187 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-scripts" (OuterVolumeSpecName: "scripts") pod "58cd44c4-477b-449c-820d-33de1ef0dba1" (UID: "58cd44c4-477b-449c-820d-33de1ef0dba1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.030413 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cd44c4-477b-449c-820d-33de1ef0dba1-kube-api-access-4f2lr" (OuterVolumeSpecName: "kube-api-access-4f2lr") pod "58cd44c4-477b-449c-820d-33de1ef0dba1" (UID: "58cd44c4-477b-449c-820d-33de1ef0dba1"). InnerVolumeSpecName "kube-api-access-4f2lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.054172 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "58cd44c4-477b-449c-820d-33de1ef0dba1" (UID: "58cd44c4-477b-449c-820d-33de1ef0dba1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.099660 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58cd44c4-477b-449c-820d-33de1ef0dba1" (UID: "58cd44c4-477b-449c-820d-33de1ef0dba1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.125750 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.125782 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.125793 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f2lr\" (UniqueName: \"kubernetes.io/projected/58cd44c4-477b-449c-820d-33de1ef0dba1-kube-api-access-4f2lr\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.125806 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.125814 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cd44c4-477b-449c-820d-33de1ef0dba1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.140464 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-config-data" (OuterVolumeSpecName: "config-data") pod "58cd44c4-477b-449c-820d-33de1ef0dba1" (UID: "58cd44c4-477b-449c-820d-33de1ef0dba1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.229192 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd44c4-477b-449c-820d-33de1ef0dba1-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.486206 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cd44c4-477b-449c-820d-33de1ef0dba1","Type":"ContainerDied","Data":"44081a168988efbee913bf8e8c0173a3f1cf1e34d97e9c806ac39c7d2a29ad13"} Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.486303 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.487770 5034 scope.go:117] "RemoveContainer" containerID="1cd1d27f410ceb0b6bf42fbbb1890ee47957e6d4db04e579e0044c68320a7d35" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.537549 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.546551 5034 scope.go:117] "RemoveContainer" containerID="6dc7cc9893d00a09bfca4e93af22739fce7496bbdb2a3f40335dada1b451dafd" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.552635 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.565813 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:44 crc kubenswrapper[5034]: E0105 22:14:44.566426 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="ceilometer-central-agent" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.566446 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="ceilometer-central-agent" Jan 05 22:14:44 crc kubenswrapper[5034]: E0105 22:14:44.566466 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="sg-core" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.566480 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="sg-core" Jan 05 22:14:44 crc kubenswrapper[5034]: E0105 22:14:44.566512 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="proxy-httpd" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.566521 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="proxy-httpd" Jan 05 22:14:44 crc kubenswrapper[5034]: E0105 22:14:44.566547 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="ceilometer-notification-agent" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.566556 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="ceilometer-notification-agent" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.566837 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="proxy-httpd" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.566864 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="ceilometer-central-agent" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.566874 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="ceilometer-notification-agent" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.566888 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" containerName="sg-core" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.568796 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.574389 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.576340 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.576939 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.582404 5034 scope.go:117] "RemoveContainer" containerID="5b91701d06e28e42ec9fc0c9972bb0d07b952d7c7701630f2489b5a58b251522" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.597591 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.623452 5034 scope.go:117] "RemoveContainer" containerID="f875cd7858cfa25812f7e00c47f3309348048d27de4fe4dbb9d4560d0d332425" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.739014 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.739519 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-run-httpd\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.739547 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-log-httpd\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.739671 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-config-data\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.739719 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng82h\" (UniqueName: \"kubernetes.io/projected/0ba2a499-6032-4042-8239-9867cf3a2968-kube-api-access-ng82h\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.739746 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.739806 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-scripts\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.741465 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.843958 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-run-httpd\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.844020 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-log-httpd\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.844062 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-config-data\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.844139 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng82h\" (UniqueName: \"kubernetes.io/projected/0ba2a499-6032-4042-8239-9867cf3a2968-kube-api-access-ng82h\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.844184 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.844226 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-scripts\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.844264 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.844362 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.844800 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-run-httpd\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.848764 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-log-httpd\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.850062 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.850665 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.853229 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-config-data\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.861863 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.863579 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng82h\" (UniqueName: \"kubernetes.io/projected/0ba2a499-6032-4042-8239-9867cf3a2968-kube-api-access-ng82h\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.866549 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-scripts\") pod \"ceilometer-0\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " pod="openstack/ceilometer-0" Jan 05 22:14:44 crc kubenswrapper[5034]: I0105 22:14:44.890356 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:45 crc kubenswrapper[5034]: I0105 22:14:45.277397 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:45 crc kubenswrapper[5034]: I0105 22:14:45.502882 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerStarted","Data":"3abcca521597427f05d86169a4169b15287af0be475ef8554ce426c7f24dcd3e"} Jan 05 22:14:45 crc kubenswrapper[5034]: I0105 22:14:45.853571 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58cd44c4-477b-449c-820d-33de1ef0dba1" path="/var/lib/kubelet/pods/58cd44c4-477b-449c-820d-33de1ef0dba1/volumes" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.520525 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.527388 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerStarted","Data":"710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f"} Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.533130 5034 generic.go:334] "Generic (PLEG): container finished" podID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerID="315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0" exitCode=137 Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.533178 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.533200 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67aa8cdc-bdff-4d44-a99b-4135eda9265a","Type":"ContainerDied","Data":"315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0"} Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.533231 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67aa8cdc-bdff-4d44-a99b-4135eda9265a","Type":"ContainerDied","Data":"31bfdce9395d4cbdb86b5e1d31566ca37bbd6c1b1cdb0425b46e8422fa7f9d08"} Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.533249 5034 scope.go:117] "RemoveContainer" containerID="315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.536462 5034 generic.go:334] "Generic (PLEG): container finished" podID="40fdb124-8bf9-40af-ac43-5d6c1de9a948" containerID="32aececbfbae1f5b2913e1b1a33025ebd94532b11413e617c74ff37f88258f15" exitCode=137 Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.536488 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"40fdb124-8bf9-40af-ac43-5d6c1de9a948","Type":"ContainerDied","Data":"32aececbfbae1f5b2913e1b1a33025ebd94532b11413e617c74ff37f88258f15"} Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.553457 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.587771 5034 scope.go:117] "RemoveContainer" containerID="01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.607899 5034 scope.go:117] "RemoveContainer" containerID="315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0" Jan 05 22:14:46 crc kubenswrapper[5034]: E0105 22:14:46.608924 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0\": container with ID starting with 315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0 not found: ID does not exist" containerID="315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.608983 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0"} err="failed to get container status \"315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0\": rpc error: code = NotFound desc = could not find container \"315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0\": container with ID starting with 315eb39f4d0101e4a4c0766a744a6c7b17cbc9ae612fc69c4b84b3ac9de0e9b0 not found: ID does not exist" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.609028 5034 scope.go:117] "RemoveContainer" containerID="01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380" Jan 05 22:14:46 crc kubenswrapper[5034]: E0105 22:14:46.609622 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380\": container with ID starting with 01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380 not found: ID does not exist" containerID="01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.609688 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380"} err="failed to get container status \"01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380\": rpc error: code = NotFound desc = could not find container \"01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380\": container with ID starting with 01990e78a04ab18a61cf098e359813d0e48fb9596355478d7f91dcc5090d5380 not found: ID does not exist" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.652618 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.653752 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.656736 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.656871 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.695620 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/67aa8cdc-bdff-4d44-a99b-4135eda9265a-logs\") pod \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.695747 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb6tb\" (UniqueName: \"kubernetes.io/projected/40fdb124-8bf9-40af-ac43-5d6c1de9a948-kube-api-access-qb6tb\") pod \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.695804 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-config-data\") pod \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.695833 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdvq8\" (UniqueName: \"kubernetes.io/projected/67aa8cdc-bdff-4d44-a99b-4135eda9265a-kube-api-access-hdvq8\") pod \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.696179 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-combined-ca-bundle\") pod \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\" (UID: \"40fdb124-8bf9-40af-ac43-5d6c1de9a948\") " Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.696222 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-config-data\") pod \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.696226 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67aa8cdc-bdff-4d44-a99b-4135eda9265a-logs" (OuterVolumeSpecName: "logs") pod "67aa8cdc-bdff-4d44-a99b-4135eda9265a" (UID: "67aa8cdc-bdff-4d44-a99b-4135eda9265a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.696251 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-combined-ca-bundle\") pod \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\" (UID: \"67aa8cdc-bdff-4d44-a99b-4135eda9265a\") " Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.697567 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67aa8cdc-bdff-4d44-a99b-4135eda9265a-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.702064 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40fdb124-8bf9-40af-ac43-5d6c1de9a948-kube-api-access-qb6tb" (OuterVolumeSpecName: "kube-api-access-qb6tb") pod "40fdb124-8bf9-40af-ac43-5d6c1de9a948" (UID: "40fdb124-8bf9-40af-ac43-5d6c1de9a948"). InnerVolumeSpecName "kube-api-access-qb6tb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.703206 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67aa8cdc-bdff-4d44-a99b-4135eda9265a-kube-api-access-hdvq8" (OuterVolumeSpecName: "kube-api-access-hdvq8") pod "67aa8cdc-bdff-4d44-a99b-4135eda9265a" (UID: "67aa8cdc-bdff-4d44-a99b-4135eda9265a"). InnerVolumeSpecName "kube-api-access-hdvq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.735613 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-config-data" (OuterVolumeSpecName: "config-data") pod "67aa8cdc-bdff-4d44-a99b-4135eda9265a" (UID: "67aa8cdc-bdff-4d44-a99b-4135eda9265a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.737013 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67aa8cdc-bdff-4d44-a99b-4135eda9265a" (UID: "67aa8cdc-bdff-4d44-a99b-4135eda9265a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.740355 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40fdb124-8bf9-40af-ac43-5d6c1de9a948" (UID: "40fdb124-8bf9-40af-ac43-5d6c1de9a948"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.745652 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-config-data" (OuterVolumeSpecName: "config-data") pod "40fdb124-8bf9-40af-ac43-5d6c1de9a948" (UID: "40fdb124-8bf9-40af-ac43-5d6c1de9a948"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.799230 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.799256 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.799265 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67aa8cdc-bdff-4d44-a99b-4135eda9265a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.799273 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb6tb\" (UniqueName: \"kubernetes.io/projected/40fdb124-8bf9-40af-ac43-5d6c1de9a948-kube-api-access-qb6tb\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.799285 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fdb124-8bf9-40af-ac43-5d6c1de9a948-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.799294 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdvq8\" (UniqueName: \"kubernetes.io/projected/67aa8cdc-bdff-4d44-a99b-4135eda9265a-kube-api-access-hdvq8\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.870693 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.885109 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.894701 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:14:46 crc kubenswrapper[5034]: E0105 22:14:46.895846 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fdb124-8bf9-40af-ac43-5d6c1de9a948" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.895876 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fdb124-8bf9-40af-ac43-5d6c1de9a948" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 22:14:46 crc kubenswrapper[5034]: E0105 22:14:46.895895 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerName="nova-metadata-log" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.895901 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerName="nova-metadata-log" Jan 05 22:14:46 crc kubenswrapper[5034]: E0105 22:14:46.895933 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerName="nova-metadata-metadata" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.895940 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerName="nova-metadata-metadata" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.896129 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fdb124-8bf9-40af-ac43-5d6c1de9a948" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.896149 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerName="nova-metadata-log" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.896163 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" containerName="nova-metadata-metadata" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.897109 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.902547 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.904617 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 22:14:46 crc kubenswrapper[5034]: I0105 22:14:46.917186 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.005372 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-config-data\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.005739 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq88k\" (UniqueName: \"kubernetes.io/projected/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-kube-api-access-sq88k\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.006225 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-logs\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.006432 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.006581 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.109030 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-logs\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.109168 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.109249 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.109279 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-config-data\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.109316 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq88k\" (UniqueName: \"kubernetes.io/projected/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-kube-api-access-sq88k\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.111685 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-logs\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.116159 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.120991 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-config-data\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.122525 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.125596 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq88k\" (UniqueName: \"kubernetes.io/projected/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-kube-api-access-sq88k\") pod \"nova-metadata-0\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.216152 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.557549 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerStarted","Data":"2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2"} Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.563208 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.563207 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"40fdb124-8bf9-40af-ac43-5d6c1de9a948","Type":"ContainerDied","Data":"55b949b4ad14ccfd428693a959ddb9034196046e156becc15b214ad2bc24f465"} Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.563386 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.563538 5034 scope.go:117] "RemoveContainer" containerID="32aececbfbae1f5b2913e1b1a33025ebd94532b11413e617c74ff37f88258f15" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.573882 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.659363 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.700053 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.728140 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.729891 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.735297 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.735552 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.735686 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.778150 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.803652 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.826197 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.826287 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2kj\" (UniqueName: \"kubernetes.io/projected/80869f0d-0e2c-4235-b5e0-3519e6c95ded-kube-api-access-rc2kj\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.826340 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.826416 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.826444 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.944653 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc2kj\" (UniqueName: \"kubernetes.io/projected/80869f0d-0e2c-4235-b5e0-3519e6c95ded-kube-api-access-rc2kj\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.959448 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.960044 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.960197 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.960425 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.944848 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40fdb124-8bf9-40af-ac43-5d6c1de9a948" path="/var/lib/kubelet/pods/40fdb124-8bf9-40af-ac43-5d6c1de9a948/volumes" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.970025 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67aa8cdc-bdff-4d44-a99b-4135eda9265a" path="/var/lib/kubelet/pods/67aa8cdc-bdff-4d44-a99b-4135eda9265a/volumes" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.971126 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-gcdp2"] Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.973282 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-gcdp2"] Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.973429 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.983184 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.983395 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.983450 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.984298 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.989380 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc2kj\" (UniqueName: \"kubernetes.io/projected/80869f0d-0e2c-4235-b5e0-3519e6c95ded-kube-api-access-rc2kj\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.989965 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:47 crc kubenswrapper[5034]: I0105 22:14:47.995757 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.001334 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.062975 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhscw\" (UniqueName: \"kubernetes.io/projected/92d2026b-e43c-47d5-ad78-e532a664f033-kube-api-access-zhscw\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.063034 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.063122 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-config\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.063402 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.063605 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.063715 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-svc\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.165850 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.166427 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.166488 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-svc\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.166556 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhscw\" (UniqueName: \"kubernetes.io/projected/92d2026b-e43c-47d5-ad78-e532a664f033-kube-api-access-zhscw\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.166596 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.166656 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-config\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.167692 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-config\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.172936 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.173183 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.173383 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-svc\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.174217 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.190156 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhscw\" (UniqueName: \"kubernetes.io/projected/92d2026b-e43c-47d5-ad78-e532a664f033-kube-api-access-zhscw\") pod \"dnsmasq-dns-5ddd577785-gcdp2\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.221276 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.387971 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.594180 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81","Type":"ContainerStarted","Data":"1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a"} Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.594782 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81","Type":"ContainerStarted","Data":"8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d"} Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.594820 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81","Type":"ContainerStarted","Data":"9d53307977547a7e00e04f07bdadcc4caea7c9b6ab966f5df471baa96cfe873d"} Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.608553 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerStarted","Data":"887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a"} Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.638091 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.638048114 podStartE2EDuration="2.638048114s" podCreationTimestamp="2026-01-05 22:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:48.618666095 +0000 UTC m=+1380.990665534" watchObservedRunningTime="2026-01-05 22:14:48.638048114 +0000 UTC m=+1381.010047553" Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.747017 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 22:14:48 crc kubenswrapper[5034]: I0105 22:14:48.955387 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-gcdp2"] Jan 05 22:14:49 crc kubenswrapper[5034]: I0105 22:14:49.635112 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"80869f0d-0e2c-4235-b5e0-3519e6c95ded","Type":"ContainerStarted","Data":"b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a"} Jan 05 22:14:49 crc kubenswrapper[5034]: I0105 22:14:49.635501 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"80869f0d-0e2c-4235-b5e0-3519e6c95ded","Type":"ContainerStarted","Data":"04457f4471c2abc6e3434d2c02f34f827bc58799ec86f78be488f5871d72ba26"} Jan 05 22:14:49 crc kubenswrapper[5034]: I0105 22:14:49.649610 5034 generic.go:334] "Generic (PLEG): container finished" podID="92d2026b-e43c-47d5-ad78-e532a664f033" containerID="bc6219970b576897171a628683199691015abe779dd7aae6b57bd79340d78bfb" exitCode=0 Jan 05 22:14:49 crc kubenswrapper[5034]: I0105 22:14:49.650464 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" event={"ID":"92d2026b-e43c-47d5-ad78-e532a664f033","Type":"ContainerDied","Data":"bc6219970b576897171a628683199691015abe779dd7aae6b57bd79340d78bfb"} Jan 05 22:14:49 crc kubenswrapper[5034]: I0105 22:14:49.650510 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" 
event={"ID":"92d2026b-e43c-47d5-ad78-e532a664f033","Type":"ContainerStarted","Data":"fbf9c2ee57c0fbb724ab9b3f165456eeb3791c73c139fa96562f088e6552688d"} Jan 05 22:14:49 crc kubenswrapper[5034]: I0105 22:14:49.681872 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.68185036 podStartE2EDuration="2.68185036s" podCreationTimestamp="2026-01-05 22:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:49.656770548 +0000 UTC m=+1382.028769987" watchObservedRunningTime="2026-01-05 22:14:49.68185036 +0000 UTC m=+1382.053849799" Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.642714 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.662970 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" event={"ID":"92d2026b-e43c-47d5-ad78-e532a664f033","Type":"ContainerStarted","Data":"20fbb538b44cc958c0860e6e3d037b5579a58d90268eae871a9788dbe68e60d7"} Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.663288 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.667021 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerStarted","Data":"78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2"} Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.667492 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.698557 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" podStartSLOduration=3.698528305 podStartE2EDuration="3.698528305s" podCreationTimestamp="2026-01-05 22:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:50.684822786 +0000 UTC m=+1383.056822235" watchObservedRunningTime="2026-01-05 22:14:50.698528305 +0000 UTC m=+1383.070527744" Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.714786 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.562365981 podStartE2EDuration="6.714760905s" podCreationTimestamp="2026-01-05 22:14:44 +0000 UTC" firstStartedPulling="2026-01-05 22:14:45.268618118 +0000 UTC m=+1377.640617557" lastFinishedPulling="2026-01-05 22:14:49.421013042 +0000 UTC m=+1381.793012481" observedRunningTime="2026-01-05 22:14:50.709023763 +0000 UTC m=+1383.081023202" watchObservedRunningTime="2026-01-05 22:14:50.714760905 +0000 UTC m=+1383.086760334" Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.921539 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.921758 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-log" containerID="cri-o://9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c" gracePeriod=30 Jan 05 22:14:50 crc kubenswrapper[5034]: I0105 22:14:50.921870 5034 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-api" containerID="cri-o://81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1" gracePeriod=30 Jan 05 22:14:51 crc kubenswrapper[5034]: I0105 22:14:51.681404 5034 generic.go:334] "Generic (PLEG): container finished" podID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerID="9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c" exitCode=143 Jan 05 22:14:51 crc kubenswrapper[5034]: I0105 22:14:51.681501 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c157dd0a-3ebf-4bb2-af9f-189d1a47af02","Type":"ContainerDied","Data":"9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c"} Jan 05 22:14:51 crc kubenswrapper[5034]: I0105 22:14:51.683007 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="proxy-httpd" containerID="cri-o://78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2" gracePeriod=30 Jan 05 22:14:51 crc kubenswrapper[5034]: I0105 22:14:51.682992 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="ceilometer-central-agent" containerID="cri-o://710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f" gracePeriod=30 Jan 05 22:14:51 crc kubenswrapper[5034]: I0105 22:14:51.683055 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="ceilometer-notification-agent" containerID="cri-o://2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2" gracePeriod=30 Jan 05 22:14:51 crc kubenswrapper[5034]: I0105 22:14:51.683134 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="sg-core" containerID="cri-o://887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a" gracePeriod=30 Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 22:14:52.124698 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 22:14:52.219353 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 22:14:52.219909 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 22:14:52.696032 5034 generic.go:334] "Generic (PLEG): container finished" podID="0ba2a499-6032-4042-8239-9867cf3a2968" containerID="78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2" exitCode=0 Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 22:14:52.696126 5034 generic.go:334] "Generic (PLEG): container finished" podID="0ba2a499-6032-4042-8239-9867cf3a2968" containerID="887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a" exitCode=2 Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 22:14:52.696191 5034 generic.go:334] "Generic (PLEG): container finished" podID="0ba2a499-6032-4042-8239-9867cf3a2968" containerID="2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2" exitCode=0 Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 
22:14:52.696124 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerDied","Data":"78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2"} Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 22:14:52.696306 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerDied","Data":"887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a"} Jan 05 22:14:52 crc kubenswrapper[5034]: I0105 22:14:52.696326 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerDied","Data":"2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2"} Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.231024 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.372147 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.489981 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-sg-core-conf-yaml\") pod \"0ba2a499-6032-4042-8239-9867cf3a2968\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.490045 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-scripts\") pod \"0ba2a499-6032-4042-8239-9867cf3a2968\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.490181 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-log-httpd\") pod \"0ba2a499-6032-4042-8239-9867cf3a2968\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.490252 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-ceilometer-tls-certs\") pod \"0ba2a499-6032-4042-8239-9867cf3a2968\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.490323 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-run-httpd\") pod \"0ba2a499-6032-4042-8239-9867cf3a2968\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.490343 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng82h\" (UniqueName: \"kubernetes.io/projected/0ba2a499-6032-4042-8239-9867cf3a2968-kube-api-access-ng82h\") pod \"0ba2a499-6032-4042-8239-9867cf3a2968\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.490380 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-combined-ca-bundle\") pod \"0ba2a499-6032-4042-8239-9867cf3a2968\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.490420 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-config-data\") pod \"0ba2a499-6032-4042-8239-9867cf3a2968\" (UID: \"0ba2a499-6032-4042-8239-9867cf3a2968\") " Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.491011 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ba2a499-6032-4042-8239-9867cf3a2968" (UID: "0ba2a499-6032-4042-8239-9867cf3a2968"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.491330 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ba2a499-6032-4042-8239-9867cf3a2968" (UID: "0ba2a499-6032-4042-8239-9867cf3a2968"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.498379 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-scripts" (OuterVolumeSpecName: "scripts") pod "0ba2a499-6032-4042-8239-9867cf3a2968" (UID: "0ba2a499-6032-4042-8239-9867cf3a2968"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.499795 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba2a499-6032-4042-8239-9867cf3a2968-kube-api-access-ng82h" (OuterVolumeSpecName: "kube-api-access-ng82h") pod "0ba2a499-6032-4042-8239-9867cf3a2968" (UID: "0ba2a499-6032-4042-8239-9867cf3a2968"). InnerVolumeSpecName "kube-api-access-ng82h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.527861 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ba2a499-6032-4042-8239-9867cf3a2968" (UID: "0ba2a499-6032-4042-8239-9867cf3a2968"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.592913 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.592952 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng82h\" (UniqueName: \"kubernetes.io/projected/0ba2a499-6032-4042-8239-9867cf3a2968-kube-api-access-ng82h\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.592966 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.592977 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.592987 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ba2a499-6032-4042-8239-9867cf3a2968-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.597389 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0ba2a499-6032-4042-8239-9867cf3a2968" (UID: "0ba2a499-6032-4042-8239-9867cf3a2968"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.615657 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ba2a499-6032-4042-8239-9867cf3a2968" (UID: "0ba2a499-6032-4042-8239-9867cf3a2968"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.625056 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-config-data" (OuterVolumeSpecName: "config-data") pod "0ba2a499-6032-4042-8239-9867cf3a2968" (UID: "0ba2a499-6032-4042-8239-9867cf3a2968"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.694753 5034 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.694797 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.694814 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba2a499-6032-4042-8239-9867cf3a2968-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.708392 5034 generic.go:334] "Generic (PLEG): container finished" podID="0ba2a499-6032-4042-8239-9867cf3a2968" containerID="710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f" exitCode=0 Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.708468 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.708490 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerDied","Data":"710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f"} Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.708558 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ba2a499-6032-4042-8239-9867cf3a2968","Type":"ContainerDied","Data":"3abcca521597427f05d86169a4169b15287af0be475ef8554ce426c7f24dcd3e"} Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.708587 5034 scope.go:117] "RemoveContainer" containerID="78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.727725 5034 scope.go:117] "RemoveContainer" containerID="887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.749569 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.762836 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.764502 5034 scope.go:117] "RemoveContainer" containerID="2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.774238 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:53 crc kubenswrapper[5034]: E0105 22:14:53.774693 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="ceilometer-central-agent" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.774711 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="ceilometer-central-agent" Jan 05 22:14:53 crc kubenswrapper[5034]: E0105 22:14:53.774741 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="sg-core" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.774748 5034 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="sg-core" Jan 05 22:14:53 crc kubenswrapper[5034]: E0105 22:14:53.774758 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="proxy-httpd" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.774765 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="proxy-httpd" Jan 05 22:14:53 crc kubenswrapper[5034]: E0105 22:14:53.774780 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="ceilometer-notification-agent" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.774786 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="ceilometer-notification-agent" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.775025 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="ceilometer-central-agent" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.775046 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="proxy-httpd" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.775058 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="sg-core" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.775087 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" containerName="ceilometer-notification-agent" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.777705 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.780701 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.780961 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.786199 5034 scope.go:117] "RemoveContainer" containerID="710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.787726 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.790355 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.796872 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7kq8\" (UniqueName: \"kubernetes.io/projected/71dab1f9-0430-4516-8eed-265cfd0c5be9-kube-api-access-t7kq8\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.797065 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.797282 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-scripts\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.797450 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-log-httpd\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.797592 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-run-httpd\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.797705 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.797837 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: 
I0105 22:14:53.797951 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-config-data\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.813373 5034 scope.go:117] "RemoveContainer" containerID="78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2" Jan 05 22:14:53 crc kubenswrapper[5034]: E0105 22:14:53.813845 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2\": container with ID starting with 78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2 not found: ID does not exist" containerID="78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.813887 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2"} err="failed to get container status \"78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2\": rpc error: code = NotFound desc = could not find container \"78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2\": container with ID starting with 78c22eed21f91282bf08e4bf66085b90fa84f00ed54fac46e61e9def5fd4aaf2 not found: ID does not exist" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.813919 5034 scope.go:117] "RemoveContainer" containerID="887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a" Jan 05 22:14:53 crc kubenswrapper[5034]: E0105 22:14:53.814426 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a\": container with ID starting with 887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a not found: ID does not exist" containerID="887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.814453 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a"} err="failed to get container status \"887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a\": rpc error: code = NotFound desc = could not find container \"887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a\": container with ID starting with 887434ec5c42c6b0a73e81f44c7be7d5d9a0c5bf96a7f84503bef9f02d2ed71a not found: ID does not exist" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.814473 5034 scope.go:117] "RemoveContainer" containerID="2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2" Jan 05 22:14:53 crc kubenswrapper[5034]: E0105 22:14:53.814988 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2\": container with ID starting with 2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2 not found: ID does not exist" containerID="2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.815037 5034 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2"} err="failed to get container status \"2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2\": rpc error: code = NotFound desc = could not find container \"2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2\": container with ID starting with 2003282cf2c01d2ba5e86d298fdaad9364dd313b80d9d86996f5ada078c123b2 not found: ID does not exist" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.815067 5034 scope.go:117] "RemoveContainer" containerID="710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f" Jan 05 22:14:53 crc kubenswrapper[5034]: E0105 22:14:53.816223 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f\": container with ID starting with 710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f not found: ID does not exist" containerID="710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.816248 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f"} err="failed to get container status \"710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f\": rpc error: code = NotFound desc = could not find container \"710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f\": container with ID starting with 710035498c47dd4207cda6e0c0c56ff700fdc60d2c20e00e5a0ac94cb13c9e6f not found: ID does not exist" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.853019 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba2a499-6032-4042-8239-9867cf3a2968" path="/var/lib/kubelet/pods/0ba2a499-6032-4042-8239-9867cf3a2968/volumes" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.899654 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-scripts\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.899722 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-log-httpd\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.899748 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-run-httpd\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.899764 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.899800 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.899822 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-config-data\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.899874 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7kq8\" (UniqueName: \"kubernetes.io/projected/71dab1f9-0430-4516-8eed-265cfd0c5be9-kube-api-access-t7kq8\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.900517 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-run-httpd\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.900727 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.901304 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-log-httpd\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.904413 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.905214 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.906228 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.906367 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-scripts\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.906897 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-config-data\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:53 crc kubenswrapper[5034]: I0105 22:14:53.917857 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7kq8\" (UniqueName: \"kubernetes.io/projected/71dab1f9-0430-4516-8eed-265cfd0c5be9-kube-api-access-t7kq8\") pod \"ceilometer-0\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " pod="openstack/ceilometer-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.104674 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.543290 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.614142 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-logs\") pod \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.614301 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htvtw\" (UniqueName: \"kubernetes.io/projected/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-kube-api-access-htvtw\") pod \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.614577 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-config-data\") pod \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.614630 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-combined-ca-bundle\") pod \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\" (UID: \"c157dd0a-3ebf-4bb2-af9f-189d1a47af02\") " Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.618029 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-logs" (OuterVolumeSpecName: "logs") pod "c157dd0a-3ebf-4bb2-af9f-189d1a47af02" (UID: "c157dd0a-3ebf-4bb2-af9f-189d1a47af02"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.635544 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-kube-api-access-htvtw" (OuterVolumeSpecName: "kube-api-access-htvtw") pod "c157dd0a-3ebf-4bb2-af9f-189d1a47af02" (UID: "c157dd0a-3ebf-4bb2-af9f-189d1a47af02"). InnerVolumeSpecName "kube-api-access-htvtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.669006 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.670875 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-config-data" (OuterVolumeSpecName: "config-data") pod "c157dd0a-3ebf-4bb2-af9f-189d1a47af02" (UID: "c157dd0a-3ebf-4bb2-af9f-189d1a47af02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:54 crc kubenswrapper[5034]: W0105 22:14:54.671293 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71dab1f9_0430_4516_8eed_265cfd0c5be9.slice/crio-5198f02be5d8ed373f011589bd807db2be8768560abb4c4829f1b65d46854e19 WatchSource:0}: Error finding container 5198f02be5d8ed373f011589bd807db2be8768560abb4c4829f1b65d46854e19: Status 404 returned error can't find the container with id 5198f02be5d8ed373f011589bd807db2be8768560abb4c4829f1b65d46854e19 Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.675984 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c157dd0a-3ebf-4bb2-af9f-189d1a47af02" (UID: "c157dd0a-3ebf-4bb2-af9f-189d1a47af02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.719984 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htvtw\" (UniqueName: \"kubernetes.io/projected/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-kube-api-access-htvtw\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.720055 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.720068 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.720334 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c157dd0a-3ebf-4bb2-af9f-189d1a47af02-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.726463 5034 generic.go:334] "Generic (PLEG): container finished" podID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerID="81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1" exitCode=0 Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.726552 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.726690 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c157dd0a-3ebf-4bb2-af9f-189d1a47af02","Type":"ContainerDied","Data":"81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1"} Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.726763 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c157dd0a-3ebf-4bb2-af9f-189d1a47af02","Type":"ContainerDied","Data":"f61381f8218e2d740364289e7f8cb40606ace41b192c06cb8a74c06fb1271c4f"} Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.726783 5034 scope.go:117] "RemoveContainer" containerID="81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.734551 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerStarted","Data":"5198f02be5d8ed373f011589bd807db2be8768560abb4c4829f1b65d46854e19"} Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.799510 5034 scope.go:117] "RemoveContainer" containerID="9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.806008 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.828885 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.837066 5034 scope.go:117] "RemoveContainer" containerID="81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1" Jan 05 22:14:54 crc kubenswrapper[5034]: E0105 22:14:54.837615 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1\": container with ID starting with 81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1 not found: ID does not exist" containerID="81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.837654 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1"} err="failed to get container status \"81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1\": rpc error: code = NotFound desc = could not find container \"81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1\": container with ID starting with 81d01d986c3c83e35ae0d777ba9bce5801f59e058ed535c5e043dcc5b3bf6da1 not found: ID does not exist" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.837680 5034 scope.go:117] "RemoveContainer" containerID="9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c" Jan 05 22:14:54 crc kubenswrapper[5034]: E0105 22:14:54.839806 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c\": container with ID starting with 9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c not found: ID does not exist" containerID="9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.839837 5034 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c"} err="failed to get container status \"9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c\": rpc error: code = NotFound desc = could not find container \"9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c\": container with ID starting with 9e04911176af726cfa9f79f4507eaa117e8a646aa4ae88b47702f1a4dda8e31c not found: ID does not exist" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.844176 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:54 crc kubenswrapper[5034]: E0105 22:14:54.845232 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-log" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.845269 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-log" Jan 05 22:14:54 crc kubenswrapper[5034]: E0105 22:14:54.845281 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-api" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.845287 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-api" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.845581 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-log" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.845596 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" containerName="nova-api-api" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.849912 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.855655 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.856133 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.856805 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.863380 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.925465 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-internal-tls-certs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.926592 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-public-tls-certs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.927158 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/356230cc-9dc5-4134-b562-ed2c7bdef752-logs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.927638 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrvg\" (UniqueName: \"kubernetes.io/projected/356230cc-9dc5-4134-b562-ed2c7bdef752-kube-api-access-nxrvg\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.927756 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-config-data\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:54 crc kubenswrapper[5034]: I0105 22:14:54.928320 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.030944 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrvg\" (UniqueName: \"kubernetes.io/projected/356230cc-9dc5-4134-b562-ed2c7bdef752-kube-api-access-nxrvg\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.031023 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-config-data\") pod 
\"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.032102 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.032168 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-internal-tls-certs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.032223 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-public-tls-certs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.032272 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/356230cc-9dc5-4134-b562-ed2c7bdef752-logs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.032906 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/356230cc-9dc5-4134-b562-ed2c7bdef752-logs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.035660 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-config-data\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.036282 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-internal-tls-certs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.036361 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.036793 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-public-tls-certs\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.053119 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrvg\" (UniqueName: \"kubernetes.io/projected/356230cc-9dc5-4134-b562-ed2c7bdef752-kube-api-access-nxrvg\") pod \"nova-api-0\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " 
pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.184273 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.664887 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.745875 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerStarted","Data":"f1d141763da55b0e46082e6afd0859a1080cd136929e067948b616a2530eac31"} Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.755423 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"356230cc-9dc5-4134-b562-ed2c7bdef752","Type":"ContainerStarted","Data":"798af1a1df11e16d10c9e155df45dd2b98a75c00c5aef1e6a17c8594ef9265c5"} Jan 05 22:14:55 crc kubenswrapper[5034]: I0105 22:14:55.854749 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c157dd0a-3ebf-4bb2-af9f-189d1a47af02" path="/var/lib/kubelet/pods/c157dd0a-3ebf-4bb2-af9f-189d1a47af02/volumes" Jan 05 22:14:56 crc kubenswrapper[5034]: I0105 22:14:56.787641 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"356230cc-9dc5-4134-b562-ed2c7bdef752","Type":"ContainerStarted","Data":"47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c"} Jan 05 22:14:56 crc kubenswrapper[5034]: I0105 22:14:56.788727 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"356230cc-9dc5-4134-b562-ed2c7bdef752","Type":"ContainerStarted","Data":"a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68"} Jan 05 22:14:56 crc kubenswrapper[5034]: I0105 22:14:56.791925 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerStarted","Data":"1d7b03a04a230b552aaf243bbc2885e5f698b8f260c1a4b1505ee39ac4fe636a"} Jan 05 22:14:56 crc kubenswrapper[5034]: I0105 22:14:56.816023 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8160015449999998 podStartE2EDuration="2.816001545s" podCreationTimestamp="2026-01-05 22:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:14:56.808660876 +0000 UTC m=+1389.180660315" watchObservedRunningTime="2026-01-05 22:14:56.816001545 +0000 UTC m=+1389.188000984" Jan 05 22:14:57 crc kubenswrapper[5034]: I0105 22:14:57.217206 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 22:14:57 crc kubenswrapper[5034]: I0105 22:14:57.217292 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 22:14:57 crc kubenswrapper[5034]: I0105 22:14:57.835253 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerStarted","Data":"3137688b20d8b842cd8f9e85cf05a9851f287206fadca80a9fae4c2672d55f96"} Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.221909 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.230374 5034 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.230449 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.317655 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.389296 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.450712 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ccvsh"] Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.451430 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" podUID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" containerName="dnsmasq-dns" containerID="cri-o://e091e99980c0aae7954373c565cf09ddcf6970c6038cdd6df72478e3465b3830" gracePeriod=10 Jan 05 22:14:58 crc kubenswrapper[5034]: E0105 22:14:58.538233 5034 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de19bd3_39c9_47d0_bb6c_4bd536d54611.slice/crio-e091e99980c0aae7954373c565cf09ddcf6970c6038cdd6df72478e3465b3830.scope\": RecentStats: unable to find data in memory cache]" Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.887387 5034 generic.go:334] "Generic (PLEG): container finished" podID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" containerID="e091e99980c0aae7954373c565cf09ddcf6970c6038cdd6df72478e3465b3830" exitCode=0 Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.887465 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" event={"ID":"8de19bd3-39c9-47d0-bb6c-4bd536d54611","Type":"ContainerDied","Data":"e091e99980c0aae7954373c565cf09ddcf6970c6038cdd6df72478e3465b3830"} Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.893159 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerStarted","Data":"9e1dc00f3e2493bdf5a7760277688af58c81bbb233be802e5fd2fc891a6f89a3"} Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.893528 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.919068 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.775893204 podStartE2EDuration="5.919037402s" podCreationTimestamp="2026-01-05 22:14:53 +0000 UTC" firstStartedPulling="2026-01-05 22:14:54.679483476 +0000 UTC m=+1387.051482915" lastFinishedPulling="2026-01-05 22:14:57.822627674 +0000 UTC m=+1390.194627113" observedRunningTime="2026-01-05 22:14:58.913561857 +0000 UTC m=+1391.285561306" watchObservedRunningTime="2026-01-05 
22:14:58.919037402 +0000 UTC m=+1391.291036841" Jan 05 22:14:58 crc kubenswrapper[5034]: I0105 22:14:58.927465 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.121622 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-l9mfg"] Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.147831 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.157131 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.161272 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.161980 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.173795 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9mfg"] Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261246 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-config\") pod \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261352 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-swift-storage-0\") pod \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261387 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-nb\") pod \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261407 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-svc\") pod \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261488 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-sb\") pod \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261509 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf4pb\" (UniqueName: \"kubernetes.io/projected/8de19bd3-39c9-47d0-bb6c-4bd536d54611-kube-api-access-hf4pb\") pod \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\" (UID: \"8de19bd3-39c9-47d0-bb6c-4bd536d54611\") " Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261842 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-scripts\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261954 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q422\" (UniqueName: \"kubernetes.io/projected/2dfdf242-b445-4031-948d-96047f780bc5-kube-api-access-9q422\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.261994 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.262024 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-config-data\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.269282 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de19bd3-39c9-47d0-bb6c-4bd536d54611-kube-api-access-hf4pb" (OuterVolumeSpecName: "kube-api-access-hf4pb") pod "8de19bd3-39c9-47d0-bb6c-4bd536d54611" (UID: "8de19bd3-39c9-47d0-bb6c-4bd536d54611"). InnerVolumeSpecName "kube-api-access-hf4pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.323816 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-config" (OuterVolumeSpecName: "config") pod "8de19bd3-39c9-47d0-bb6c-4bd536d54611" (UID: "8de19bd3-39c9-47d0-bb6c-4bd536d54611"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.330388 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8de19bd3-39c9-47d0-bb6c-4bd536d54611" (UID: "8de19bd3-39c9-47d0-bb6c-4bd536d54611"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.337870 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8de19bd3-39c9-47d0-bb6c-4bd536d54611" (UID: "8de19bd3-39c9-47d0-bb6c-4bd536d54611"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.346547 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8de19bd3-39c9-47d0-bb6c-4bd536d54611" (UID: "8de19bd3-39c9-47d0-bb6c-4bd536d54611"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.361166 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8de19bd3-39c9-47d0-bb6c-4bd536d54611" (UID: "8de19bd3-39c9-47d0-bb6c-4bd536d54611"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.364950 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q422\" (UniqueName: \"kubernetes.io/projected/2dfdf242-b445-4031-948d-96047f780bc5-kube-api-access-9q422\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365015 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365040 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-config-data\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365133 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-scripts\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365242 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365448 5034 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365545 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365560 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 
22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365569 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8de19bd3-39c9-47d0-bb6c-4bd536d54611-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.365580 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf4pb\" (UniqueName: \"kubernetes.io/projected/8de19bd3-39c9-47d0-bb6c-4bd536d54611-kube-api-access-hf4pb\") on node \"crc\" DevicePath \"\"" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.369792 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-config-data\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.370464 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-scripts\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.372324 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.386432 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q422\" (UniqueName: \"kubernetes.io/projected/2dfdf242-b445-4031-948d-96047f780bc5-kube-api-access-9q422\") pod \"nova-cell1-cell-mapping-l9mfg\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.483362 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.905926 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" event={"ID":"8de19bd3-39c9-47d0-bb6c-4bd536d54611","Type":"ContainerDied","Data":"8df1f628499a66cdfb5b815a4ae03ff3fea633b2cb2063fa50a9eff9192a42bf"} Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.906389 5034 scope.go:117] "RemoveContainer" containerID="e091e99980c0aae7954373c565cf09ddcf6970c6038cdd6df72478e3465b3830" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.906004 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-ccvsh" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.938993 5034 scope.go:117] "RemoveContainer" containerID="41a7d2da4e0c3dbd8ff0015cadc1c9dfc7a7b5f6255e62c869d780dc99f63b99" Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.947821 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ccvsh"] Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.958452 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ccvsh"] Jan 05 22:14:59 crc kubenswrapper[5034]: I0105 22:14:59.983777 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9mfg"] Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.141698 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8"] Jan 05 22:15:00 crc kubenswrapper[5034]: E0105 22:15:00.142453 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" containerName="init" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.142478 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" containerName="init" Jan 05 22:15:00 crc kubenswrapper[5034]: E0105 22:15:00.142528 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" containerName="dnsmasq-dns" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.142538 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" containerName="dnsmasq-dns" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.142959 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" containerName="dnsmasq-dns" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.144101 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.148188 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.149965 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.173803 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8"] Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.290950 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afbe1e07-4332-455a-b66c-57b800a25825-secret-volume\") pod \"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.291099 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4t4k\" (UniqueName: \"kubernetes.io/projected/afbe1e07-4332-455a-b66c-57b800a25825-kube-api-access-t4t4k\") pod \"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.291195 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afbe1e07-4332-455a-b66c-57b800a25825-config-volume\") pod \"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.393145 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afbe1e07-4332-455a-b66c-57b800a25825-secret-volume\") pod \"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.393284 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4t4k\" (UniqueName: \"kubernetes.io/projected/afbe1e07-4332-455a-b66c-57b800a25825-kube-api-access-t4t4k\") pod \"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.393407 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afbe1e07-4332-455a-b66c-57b800a25825-config-volume\") pod \"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.394382 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afbe1e07-4332-455a-b66c-57b800a25825-config-volume\") pod 
\"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.405422 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afbe1e07-4332-455a-b66c-57b800a25825-secret-volume\") pod \"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.419013 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4t4k\" (UniqueName: \"kubernetes.io/projected/afbe1e07-4332-455a-b66c-57b800a25825-kube-api-access-t4t4k\") pod \"collect-profiles-29460855-hxqr8\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.491917 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.831264 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8"] Jan 05 22:15:00 crc kubenswrapper[5034]: W0105 22:15:00.833622 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbe1e07_4332_455a_b66c_57b800a25825.slice/crio-6dec8d02ec518bfb66db30321a29945c6e9001157bbc16272c39e1df614dd188 WatchSource:0}: Error finding container 6dec8d02ec518bfb66db30321a29945c6e9001157bbc16272c39e1df614dd188: Status 404 returned error can't find the container with id 6dec8d02ec518bfb66db30321a29945c6e9001157bbc16272c39e1df614dd188 Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.921246 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9mfg" event={"ID":"2dfdf242-b445-4031-948d-96047f780bc5","Type":"ContainerStarted","Data":"3a57024fa88c30e7a57854636d86d3229e9fe60d4e42969754544f5388f31269"} Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.921724 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9mfg" event={"ID":"2dfdf242-b445-4031-948d-96047f780bc5","Type":"ContainerStarted","Data":"aacb86e68cf1b89b37a47ba6fe4369aefc478e2686cc214171c5c7c1fe4c1a8b"} Jan 05 22:15:00 crc kubenswrapper[5034]: I0105 22:15:00.931509 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" event={"ID":"afbe1e07-4332-455a-b66c-57b800a25825","Type":"ContainerStarted","Data":"6dec8d02ec518bfb66db30321a29945c6e9001157bbc16272c39e1df614dd188"} Jan 05 22:15:01 crc kubenswrapper[5034]: I0105 22:15:01.850755 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de19bd3-39c9-47d0-bb6c-4bd536d54611" path="/var/lib/kubelet/pods/8de19bd3-39c9-47d0-bb6c-4bd536d54611/volumes" Jan 05 22:15:01 crc kubenswrapper[5034]: I0105 22:15:01.944122 5034 generic.go:334] "Generic (PLEG): container finished" podID="afbe1e07-4332-455a-b66c-57b800a25825" containerID="e8b18c34342ca809c0d77ef65c97ca1677af7ff8ffd24416c1c82fcf7970aa0b" exitCode=0 Jan 05 22:15:01 crc kubenswrapper[5034]: I0105 22:15:01.945544 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" event={"ID":"afbe1e07-4332-455a-b66c-57b800a25825","Type":"ContainerDied","Data":"e8b18c34342ca809c0d77ef65c97ca1677af7ff8ffd24416c1c82fcf7970aa0b"} Jan 05 22:15:01 crc kubenswrapper[5034]: I0105 22:15:01.970540 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-l9mfg" podStartSLOduration=2.970521421 podStartE2EDuration="2.970521421s" podCreationTimestamp="2026-01-05 22:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:15:00.957577132 +0000 UTC m=+1393.329576571" watchObservedRunningTime="2026-01-05 22:15:01.970521421 +0000 UTC m=+1394.342520850" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.354720 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.457470 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afbe1e07-4332-455a-b66c-57b800a25825-config-volume\") pod \"afbe1e07-4332-455a-b66c-57b800a25825\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.457670 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4t4k\" (UniqueName: \"kubernetes.io/projected/afbe1e07-4332-455a-b66c-57b800a25825-kube-api-access-t4t4k\") pod \"afbe1e07-4332-455a-b66c-57b800a25825\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.458332 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbe1e07-4332-455a-b66c-57b800a25825-config-volume" (OuterVolumeSpecName: "config-volume") pod "afbe1e07-4332-455a-b66c-57b800a25825" (UID: "afbe1e07-4332-455a-b66c-57b800a25825"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.458752 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afbe1e07-4332-455a-b66c-57b800a25825-secret-volume\") pod \"afbe1e07-4332-455a-b66c-57b800a25825\" (UID: \"afbe1e07-4332-455a-b66c-57b800a25825\") " Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.459187 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afbe1e07-4332-455a-b66c-57b800a25825-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.464280 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbe1e07-4332-455a-b66c-57b800a25825-kube-api-access-t4t4k" (OuterVolumeSpecName: "kube-api-access-t4t4k") pod "afbe1e07-4332-455a-b66c-57b800a25825" (UID: "afbe1e07-4332-455a-b66c-57b800a25825"). InnerVolumeSpecName "kube-api-access-t4t4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.466690 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbe1e07-4332-455a-b66c-57b800a25825-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afbe1e07-4332-455a-b66c-57b800a25825" (UID: "afbe1e07-4332-455a-b66c-57b800a25825"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.562019 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4t4k\" (UniqueName: \"kubernetes.io/projected/afbe1e07-4332-455a-b66c-57b800a25825-kube-api-access-t4t4k\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.562048 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afbe1e07-4332-455a-b66c-57b800a25825-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.964431 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" event={"ID":"afbe1e07-4332-455a-b66c-57b800a25825","Type":"ContainerDied","Data":"6dec8d02ec518bfb66db30321a29945c6e9001157bbc16272c39e1df614dd188"} Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.964484 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dec8d02ec518bfb66db30321a29945c6e9001157bbc16272c39e1df614dd188" Jan 05 22:15:03 crc kubenswrapper[5034]: I0105 22:15:03.965055 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8" Jan 05 22:15:05 crc kubenswrapper[5034]: I0105 22:15:05.185245 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 22:15:05 crc kubenswrapper[5034]: I0105 22:15:05.187459 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 22:15:05 crc kubenswrapper[5034]: I0105 22:15:05.983520 5034 generic.go:334] "Generic (PLEG): container finished" podID="2dfdf242-b445-4031-948d-96047f780bc5" containerID="3a57024fa88c30e7a57854636d86d3229e9fe60d4e42969754544f5388f31269" exitCode=0 Jan 05 22:15:05 crc kubenswrapper[5034]: I0105 22:15:05.983594 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9mfg" event={"ID":"2dfdf242-b445-4031-948d-96047f780bc5","Type":"ContainerDied","Data":"3a57024fa88c30e7a57854636d86d3229e9fe60d4e42969754544f5388f31269"} Jan 05 22:15:06 crc kubenswrapper[5034]: I0105 22:15:06.207283 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 22:15:06 crc kubenswrapper[5034]: I0105 22:15:06.207296 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.222333 5034 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.224018 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.228423 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.359950 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.453257 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-config-data\") pod \"2dfdf242-b445-4031-948d-96047f780bc5\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.453448 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-scripts\") pod \"2dfdf242-b445-4031-948d-96047f780bc5\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.453547 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-combined-ca-bundle\") pod \"2dfdf242-b445-4031-948d-96047f780bc5\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.453612 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q422\" (UniqueName: \"kubernetes.io/projected/2dfdf242-b445-4031-948d-96047f780bc5-kube-api-access-9q422\") pod \"2dfdf242-b445-4031-948d-96047f780bc5\" (UID: \"2dfdf242-b445-4031-948d-96047f780bc5\") " Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.460258 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-scripts" (OuterVolumeSpecName: "scripts") pod "2dfdf242-b445-4031-948d-96047f780bc5" (UID: "2dfdf242-b445-4031-948d-96047f780bc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.461062 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfdf242-b445-4031-948d-96047f780bc5-kube-api-access-9q422" (OuterVolumeSpecName: "kube-api-access-9q422") pod "2dfdf242-b445-4031-948d-96047f780bc5" (UID: "2dfdf242-b445-4031-948d-96047f780bc5"). InnerVolumeSpecName "kube-api-access-9q422". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.485636 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dfdf242-b445-4031-948d-96047f780bc5" (UID: "2dfdf242-b445-4031-948d-96047f780bc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.488606 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-config-data" (OuterVolumeSpecName: "config-data") pod "2dfdf242-b445-4031-948d-96047f780bc5" (UID: "2dfdf242-b445-4031-948d-96047f780bc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.557896 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.557958 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q422\" (UniqueName: \"kubernetes.io/projected/2dfdf242-b445-4031-948d-96047f780bc5-kube-api-access-9q422\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.557983 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:07 crc kubenswrapper[5034]: I0105 22:15:07.558004 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfdf242-b445-4031-948d-96047f780bc5-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.003696 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9mfg" event={"ID":"2dfdf242-b445-4031-948d-96047f780bc5","Type":"ContainerDied","Data":"aacb86e68cf1b89b37a47ba6fe4369aefc478e2686cc214171c5c7c1fe4c1a8b"} Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.003738 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aacb86e68cf1b89b37a47ba6fe4369aefc478e2686cc214171c5c7c1fe4c1a8b" Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.003779 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9mfg" Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.011884 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.213458 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.213948 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c3cc7b05-b609-4653-bf64-051aa3e11519" containerName="nova-scheduler-scheduler" containerID="cri-o://4203de1c272e07322c3cb2fb23ab8e191fcd3c0e7992de217064a0186d50eb84" gracePeriod=30 Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.236024 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.236352 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-log" containerID="cri-o://a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68" gracePeriod=30 Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.236849 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-api" containerID="cri-o://47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c" gracePeriod=30 Jan 05 22:15:08 crc kubenswrapper[5034]: I0105 22:15:08.254488 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:15:09 crc kubenswrapper[5034]: I0105 22:15:09.014286 5034 generic.go:334] "Generic (PLEG): container finished" podID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerID="a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68" exitCode=143 Jan 05 22:15:09 crc kubenswrapper[5034]: I0105 22:15:09.014375 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"356230cc-9dc5-4134-b562-ed2c7bdef752","Type":"ContainerDied","Data":"a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68"} Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.023959 5034 generic.go:334] "Generic (PLEG): container finished" podID="c3cc7b05-b609-4653-bf64-051aa3e11519" containerID="4203de1c272e07322c3cb2fb23ab8e191fcd3c0e7992de217064a0186d50eb84" exitCode=0 Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.024053 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c3cc7b05-b609-4653-bf64-051aa3e11519","Type":"ContainerDied","Data":"4203de1c272e07322c3cb2fb23ab8e191fcd3c0e7992de217064a0186d50eb84"} Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.024134 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c3cc7b05-b609-4653-bf64-051aa3e11519","Type":"ContainerDied","Data":"1aaf51d807934681fd5c94f6b8fbefc95c096cf0cad6594539eb259893337fbf"} Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.024149 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aaf51d807934681fd5c94f6b8fbefc95c096cf0cad6594539eb259893337fbf" Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.024204 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-log" containerID="cri-o://8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d" gracePeriod=30 Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.024302 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-metadata" containerID="cri-o://1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a" gracePeriod=30 Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.097874 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.212440 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-combined-ca-bundle\") pod \"c3cc7b05-b609-4653-bf64-051aa3e11519\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.212706 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-config-data\") pod \"c3cc7b05-b609-4653-bf64-051aa3e11519\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.212836 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6xxc\" (UniqueName: \"kubernetes.io/projected/c3cc7b05-b609-4653-bf64-051aa3e11519-kube-api-access-q6xxc\") pod \"c3cc7b05-b609-4653-bf64-051aa3e11519\" (UID: \"c3cc7b05-b609-4653-bf64-051aa3e11519\") " Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.218544 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3cc7b05-b609-4653-bf64-051aa3e11519-kube-api-access-q6xxc" (OuterVolumeSpecName: "kube-api-access-q6xxc") pod "c3cc7b05-b609-4653-bf64-051aa3e11519" (UID: "c3cc7b05-b609-4653-bf64-051aa3e11519"). InnerVolumeSpecName "kube-api-access-q6xxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.243853 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3cc7b05-b609-4653-bf64-051aa3e11519" (UID: "c3cc7b05-b609-4653-bf64-051aa3e11519"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.246115 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-config-data" (OuterVolumeSpecName: "config-data") pod "c3cc7b05-b609-4653-bf64-051aa3e11519" (UID: "c3cc7b05-b609-4653-bf64-051aa3e11519"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.314990 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.315033 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6xxc\" (UniqueName: \"kubernetes.io/projected/c3cc7b05-b609-4653-bf64-051aa3e11519-kube-api-access-q6xxc\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:10 crc kubenswrapper[5034]: I0105 22:15:10.315048 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3cc7b05-b609-4653-bf64-051aa3e11519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.036712 5034 generic.go:334] "Generic (PLEG): container finished" podID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerID="8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d" exitCode=143 Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.036767 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81","Type":"ContainerDied","Data":"8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d"} Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.036797 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.066766 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.074731 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.095997 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:15:11 crc kubenswrapper[5034]: E0105 22:15:11.096501 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfdf242-b445-4031-948d-96047f780bc5" containerName="nova-manage" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.096523 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfdf242-b445-4031-948d-96047f780bc5" containerName="nova-manage" Jan 05 22:15:11 crc kubenswrapper[5034]: E0105 22:15:11.096549 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3cc7b05-b609-4653-bf64-051aa3e11519" containerName="nova-scheduler-scheduler" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.096556 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3cc7b05-b609-4653-bf64-051aa3e11519" containerName="nova-scheduler-scheduler" Jan 05 22:15:11 crc kubenswrapper[5034]: E0105 22:15:11.096566 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbe1e07-4332-455a-b66c-57b800a25825" containerName="collect-profiles" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.096574 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbe1e07-4332-455a-b66c-57b800a25825" containerName="collect-profiles" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.096895 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfdf242-b445-4031-948d-96047f780bc5" containerName="nova-manage" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.096927 5034 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c3cc7b05-b609-4653-bf64-051aa3e11519" containerName="nova-scheduler-scheduler" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.096942 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbe1e07-4332-455a-b66c-57b800a25825" containerName="collect-profiles" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.097886 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.100749 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.130515 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.205966 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.206164 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-config-data\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.206296 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6fcg\" (UniqueName: \"kubernetes.io/projected/033973ad-b5ce-4136-92d2-0a2b976324db-kube-api-access-r6fcg\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.309588 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6fcg\" (UniqueName: \"kubernetes.io/projected/033973ad-b5ce-4136-92d2-0a2b976324db-kube-api-access-r6fcg\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.309654 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.309774 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-config-data\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.317477 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-config-data\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.317499 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.331627 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6fcg\" (UniqueName: \"kubernetes.io/projected/033973ad-b5ce-4136-92d2-0a2b976324db-kube-api-access-r6fcg\") pod \"nova-scheduler-0\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.417866 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.851695 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3cc7b05-b609-4653-bf64-051aa3e11519" path="/var/lib/kubelet/pods/c3cc7b05-b609-4653-bf64-051aa3e11519/volumes" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.956735 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:15:11 crc kubenswrapper[5034]: I0105 22:15:11.964380 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.071422 5034 generic.go:334] "Generic (PLEG): container finished" podID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerID="47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c" exitCode=0 Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.071552 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"356230cc-9dc5-4134-b562-ed2c7bdef752","Type":"ContainerDied","Data":"47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c"} Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.071597 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"356230cc-9dc5-4134-b562-ed2c7bdef752","Type":"ContainerDied","Data":"798af1a1df11e16d10c9e155df45dd2b98a75c00c5aef1e6a17c8594ef9265c5"} Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.071622 5034 scope.go:117] "RemoveContainer" containerID="47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.071927 5034 util.go:48] "No ready sandbox for pod can be found. 
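The replacement nova-scheduler-0 (new UID 033973ad-b5ce-4136-92d2-0a2b976324db) shows the mount-side mirror image of the teardown: VerifyControllerAttachedVolume started, then MountVolume started, then MountVolume.SetUp succeeded, per volume. A sketch, under the same saved-to-a-file assumption, that flags any volume that never reaches SetUp:

# Hedged sketch: track each volume's progress through the kubelet mount pipeline.
import re, sys
from collections import defaultdict

STAGES = ["VerifyControllerAttachedVolume started",
          "MountVolume started",
          "MountVolume.SetUp succeeded"]

progress = defaultdict(set)
for line in open(sys.argv[1]):          # assumed log file
    m = re.search(r'UniqueName: \\"([^\\]+)\\"', line)
    if not m:
        continue
    for stage in STAGES:
        if stage in line:
            progress[m.group(1)].add(stage)

for vol, seen in sorted(progress.items()):
    missing = [s for s in STAGES if s not in seen]
    print(vol, "OK" if not missing else f"missing: {missing}")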
Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.079387 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"033973ad-b5ce-4136-92d2-0a2b976324db","Type":"ContainerStarted","Data":"dd7d703695c764487172ee10f74d7f1730266da2aae12cfb7c22e200c0fdfc28"} Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.111465 5034 scope.go:117] "RemoveContainer" containerID="a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.129295 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxrvg\" (UniqueName: \"kubernetes.io/projected/356230cc-9dc5-4134-b562-ed2c7bdef752-kube-api-access-nxrvg\") pod \"356230cc-9dc5-4134-b562-ed2c7bdef752\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.130120 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/356230cc-9dc5-4134-b562-ed2c7bdef752-logs\") pod \"356230cc-9dc5-4134-b562-ed2c7bdef752\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.130219 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-internal-tls-certs\") pod \"356230cc-9dc5-4134-b562-ed2c7bdef752\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.130278 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-config-data\") pod \"356230cc-9dc5-4134-b562-ed2c7bdef752\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.130367 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-public-tls-certs\") pod \"356230cc-9dc5-4134-b562-ed2c7bdef752\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.130510 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-combined-ca-bundle\") pod \"356230cc-9dc5-4134-b562-ed2c7bdef752\" (UID: \"356230cc-9dc5-4134-b562-ed2c7bdef752\") " Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.130951 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356230cc-9dc5-4134-b562-ed2c7bdef752-logs" (OuterVolumeSpecName: "logs") pod "356230cc-9dc5-4134-b562-ed2c7bdef752" (UID: "356230cc-9dc5-4134-b562-ed2c7bdef752"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.131862 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/356230cc-9dc5-4134-b562-ed2c7bdef752-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.135680 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356230cc-9dc5-4134-b562-ed2c7bdef752-kube-api-access-nxrvg" (OuterVolumeSpecName: "kube-api-access-nxrvg") pod "356230cc-9dc5-4134-b562-ed2c7bdef752" (UID: "356230cc-9dc5-4134-b562-ed2c7bdef752"). InnerVolumeSpecName "kube-api-access-nxrvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.140363 5034 scope.go:117] "RemoveContainer" containerID="47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c" Jan 05 22:15:12 crc kubenswrapper[5034]: E0105 22:15:12.140970 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c\": container with ID starting with 47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c not found: ID does not exist" containerID="47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.141019 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c"} err="failed to get container status \"47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c\": rpc error: code = NotFound desc = could not find container \"47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c\": container with ID starting with 47e5affeaccfc96c7e457982379877b44139874ea0afc3759094be1d7f7eeb7c not found: ID does not exist" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.141046 5034 scope.go:117] "RemoveContainer" containerID="a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68" Jan 05 22:15:12 crc kubenswrapper[5034]: E0105 22:15:12.141505 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68\": container with ID starting with a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68 not found: ID does not exist" containerID="a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.141534 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68"} err="failed to get container status \"a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68\": rpc error: code = NotFound desc = could not find container \"a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68\": container with ID starting with a09dbc4e2446b57c7ce2d01ac6f84938ad274f19013caf960d9502607b11cb68 not found: ID does not exist" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.165448 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-config-data" (OuterVolumeSpecName: "config-data") pod "356230cc-9dc5-4134-b562-ed2c7bdef752" (UID: 
"356230cc-9dc5-4134-b562-ed2c7bdef752"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.168654 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "356230cc-9dc5-4134-b562-ed2c7bdef752" (UID: "356230cc-9dc5-4134-b562-ed2c7bdef752"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.200007 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "356230cc-9dc5-4134-b562-ed2c7bdef752" (UID: "356230cc-9dc5-4134-b562-ed2c7bdef752"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.200441 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "356230cc-9dc5-4134-b562-ed2c7bdef752" (UID: "356230cc-9dc5-4134-b562-ed2c7bdef752"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.234455 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.234492 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.234501 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.234509 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356230cc-9dc5-4134-b562-ed2c7bdef752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.234520 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxrvg\" (UniqueName: \"kubernetes.io/projected/356230cc-9dc5-4134-b562-ed2c7bdef752-kube-api-access-nxrvg\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.414035 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.432870 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.443717 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 22:15:12 crc kubenswrapper[5034]: E0105 22:15:12.444180 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-log" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.444197 5034 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-log" Jan 05 22:15:12 crc kubenswrapper[5034]: E0105 22:15:12.444220 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-api" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.444230 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-api" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.444486 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-log" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.444524 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" containerName="nova-api-api" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.445837 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.449484 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.449827 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.452768 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.454773 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.643102 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.643781 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-config-data\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.643961 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-public-tls-certs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.644096 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.644205 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7lj\" (UniqueName: \"kubernetes.io/projected/eaa3282d-5044-490b-be8e-5b721c49d338-kube-api-access-jc7lj\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " 
pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.644330 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa3282d-5044-490b-be8e-5b721c49d338-logs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.746622 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.747189 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-config-data\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.747962 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-public-tls-certs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.748175 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.748645 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc7lj\" (UniqueName: \"kubernetes.io/projected/eaa3282d-5044-490b-be8e-5b721c49d338-kube-api-access-jc7lj\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.748765 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa3282d-5044-490b-be8e-5b721c49d338-logs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.752071 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.752626 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-config-data\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.752629 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc 
kubenswrapper[5034]: I0105 22:15:12.752708 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa3282d-5044-490b-be8e-5b721c49d338-logs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.753957 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-public-tls-certs\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.765523 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc7lj\" (UniqueName: \"kubernetes.io/projected/eaa3282d-5044-490b-be8e-5b721c49d338-kube-api-access-jc7lj\") pod \"nova-api-0\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " pod="openstack/nova-api-0" Jan 05 22:15:12 crc kubenswrapper[5034]: I0105 22:15:12.808468 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.091658 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"033973ad-b5ce-4136-92d2-0a2b976324db","Type":"ContainerStarted","Data":"d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c"} Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.118133 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.118108248 podStartE2EDuration="2.118108248s" podCreationTimestamp="2026-01-05 22:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:15:13.112102908 +0000 UTC m=+1405.484102347" watchObservedRunningTime="2026-01-05 22:15:13.118108248 +0000 UTC m=+1405.490107687" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.155721 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:60836->10.217.0.196:8775: read: connection reset by peer" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.155979 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:60834->10.217.0.196:8775: read: connection reset by peer" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.306347 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:15:13 crc kubenswrapper[5034]: W0105 22:15:13.308477 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa3282d_5044_490b_be8e_5b721c49d338.slice/crio-237e67627db05d97c5134eb5b774b3cb669a3a4dde82b6b8687f0fb29b15fa8b WatchSource:0}: Error finding container 237e67627db05d97c5134eb5b774b3cb669a3a4dde82b6b8687f0fb29b15fa8b: Status 404 returned error can't find the container with id 237e67627db05d97c5134eb5b774b3cb669a3a4dde82b6b8687f0fb29b15fa8b Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.616512 5034 
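pod_startup_latency_tracker.go:104 reports the SLO-relevant startup time; nova-scheduler-0 above took about 2.12 s, with no image pull (both pull timestamps are the zero value 0001-01-01). A sketch to tabulate those durations across the excerpt, same saved-file assumption:

# Hedged sketch: extract pod startup SLO durations reported by the kubelet.
import re, sys

PAT = re.compile(r'pod="([^"]+)" podStartSLOduration=([\d.]+)')

for line in open(sys.argv[1]):          # assumed log file
    if "Observed pod startup duration" not in line:
        continue
    m = PAT.search(line)
    if m:
        print(f"{m.group(1)}: {float(m.group(2)):.2f}s")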
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.675916 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-combined-ca-bundle\") pod \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.676478 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq88k\" (UniqueName: \"kubernetes.io/projected/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-kube-api-access-sq88k\") pod \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.676584 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-logs\") pod \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.676603 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-nova-metadata-tls-certs\") pod \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.676882 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-config-data\") pod \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\" (UID: \"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81\") " Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.677450 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-logs" (OuterVolumeSpecName: "logs") pod "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" (UID: "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.681336 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-kube-api-access-sq88k" (OuterVolumeSpecName: "kube-api-access-sq88k") pod "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" (UID: "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81"). InnerVolumeSpecName "kube-api-access-sq88k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.745273 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" (UID: "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.752045 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-config-data" (OuterVolumeSpecName: "config-data") pod "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" (UID: "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.755988 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" (UID: "3116a8e4-6a9b-4ab8-b9e9-d003f3abef81"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.779293 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq88k\" (UniqueName: \"kubernetes.io/projected/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-kube-api-access-sq88k\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.779337 5034 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.779353 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.779362 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.779376 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:13 crc kubenswrapper[5034]: I0105 22:15:13.850694 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="356230cc-9dc5-4134-b562-ed2c7bdef752" path="/var/lib/kubelet/pods/356230cc-9dc5-4134-b562-ed2c7bdef752/volumes" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.107717 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eaa3282d-5044-490b-be8e-5b721c49d338","Type":"ContainerStarted","Data":"e71299f8473ea6e97ab3f521671935fa9ae99d0a935b71e63ce2ceb108169b56"} Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.107801 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eaa3282d-5044-490b-be8e-5b721c49d338","Type":"ContainerStarted","Data":"60e0dc06f5d11e4ea971c7f7cf856032cf0326af71dd5f35ab34721c3f181e11"} Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.107814 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eaa3282d-5044-490b-be8e-5b721c49d338","Type":"ContainerStarted","Data":"237e67627db05d97c5134eb5b774b3cb669a3a4dde82b6b8687f0fb29b15fa8b"} Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.115189 5034 generic.go:334] "Generic (PLEG): container finished" podID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerID="1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a" exitCode=0 Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.115505 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.116483 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81","Type":"ContainerDied","Data":"1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a"} Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.116541 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3116a8e4-6a9b-4ab8-b9e9-d003f3abef81","Type":"ContainerDied","Data":"9d53307977547a7e00e04f07bdadcc4caea7c9b6ab966f5df471baa96cfe873d"} Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.116567 5034 scope.go:117] "RemoveContainer" containerID="1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.144744 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.144708746 podStartE2EDuration="2.144708746s" podCreationTimestamp="2026-01-05 22:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:15:14.136549444 +0000 UTC m=+1406.508548883" watchObservedRunningTime="2026-01-05 22:15:14.144708746 +0000 UTC m=+1406.516708185" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.154342 5034 scope.go:117] "RemoveContainer" containerID="8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.173724 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.181980 5034 scope.go:117] "RemoveContainer" containerID="1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a" Jan 05 22:15:14 crc kubenswrapper[5034]: E0105 22:15:14.184050 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a\": container with ID starting with 1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a not found: ID does not exist" containerID="1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.184185 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a"} err="failed to get container status \"1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a\": rpc error: code = NotFound desc = could not find container \"1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a\": container with ID starting with 1c1ce593fea6a47145ccdb580b5c5dcba92560f9dd37b3375c00446b01cd8f9a not found: ID does not exist" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.184221 5034 scope.go:117] "RemoveContainer" containerID="8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d" Jan 05 22:15:14 crc kubenswrapper[5034]: E0105 22:15:14.184708 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d\": container with ID starting with 8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d not found: ID does not exist" 
containerID="8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.184738 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d"} err="failed to get container status \"8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d\": rpc error: code = NotFound desc = could not find container \"8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d\": container with ID starting with 8320bdd8cf6289f2e568a80417181aaf797d47cbb802437e9e7855c6c4f33c6d not found: ID does not exist" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.200140 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.213502 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:15:14 crc kubenswrapper[5034]: E0105 22:15:14.214325 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-log" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.214348 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-log" Jan 05 22:15:14 crc kubenswrapper[5034]: E0105 22:15:14.214391 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-metadata" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.214400 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-metadata" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.214628 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-log" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.214681 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" containerName="nova-metadata-metadata" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.216416 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.232700 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.232736 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.241489 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.395969 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-config-data\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.396350 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.396634 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11669bb7-2e25-4817-a4e8-a487ea5b90cb-logs\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.396767 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnwlp\" (UniqueName: \"kubernetes.io/projected/11669bb7-2e25-4817-a4e8-a487ea5b90cb-kube-api-access-vnwlp\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.396906 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.502307 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.502474 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-config-data\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.502521 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " 
pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.502571 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11669bb7-2e25-4817-a4e8-a487ea5b90cb-logs\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.502628 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnwlp\" (UniqueName: \"kubernetes.io/projected/11669bb7-2e25-4817-a4e8-a487ea5b90cb-kube-api-access-vnwlp\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.504118 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11669bb7-2e25-4817-a4e8-a487ea5b90cb-logs\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.520319 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.526512 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.536020 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-config-data\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.544225 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnwlp\" (UniqueName: \"kubernetes.io/projected/11669bb7-2e25-4817-a4e8-a487ea5b90cb-kube-api-access-vnwlp\") pod \"nova-metadata-0\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " pod="openstack/nova-metadata-0" Jan 05 22:15:14 crc kubenswrapper[5034]: I0105 22:15:14.553780 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:15:15 crc kubenswrapper[5034]: W0105 22:15:15.695534 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11669bb7_2e25_4817_a4e8_a487ea5b90cb.slice/crio-6b5f42e9b038055bad9d0746b396d80efc4e409a77f82361bd85234c134e6520 WatchSource:0}: Error finding container 6b5f42e9b038055bad9d0746b396d80efc4e409a77f82361bd85234c134e6520: Status 404 returned error can't find the container with id 6b5f42e9b038055bad9d0746b396d80efc4e409a77f82361bd85234c134e6520 Jan 05 22:15:15 crc kubenswrapper[5034]: I0105 22:15:15.701215 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:15:15 crc kubenswrapper[5034]: I0105 22:15:15.851897 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3116a8e4-6a9b-4ab8-b9e9-d003f3abef81" path="/var/lib/kubelet/pods/3116a8e4-6a9b-4ab8-b9e9-d003f3abef81/volumes" Jan 05 22:15:16 crc kubenswrapper[5034]: I0105 22:15:16.137980 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11669bb7-2e25-4817-a4e8-a487ea5b90cb","Type":"ContainerStarted","Data":"3291212e6637ec18d4d97f490e2b13376227e40062c2978ce122475ac1ccee20"} Jan 05 22:15:16 crc kubenswrapper[5034]: I0105 22:15:16.138027 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11669bb7-2e25-4817-a4e8-a487ea5b90cb","Type":"ContainerStarted","Data":"6b5f42e9b038055bad9d0746b396d80efc4e409a77f82361bd85234c134e6520"} Jan 05 22:15:16 crc kubenswrapper[5034]: I0105 22:15:16.418801 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 22:15:17 crc kubenswrapper[5034]: I0105 22:15:17.151391 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11669bb7-2e25-4817-a4e8-a487ea5b90cb","Type":"ContainerStarted","Data":"1b21cd994fe20b3b5d50504e11ca0a8e875e6198715a73ad6b85c6ab13ae3f03"} Jan 05 22:15:17 crc kubenswrapper[5034]: I0105 22:15:17.182498 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.182477965 podStartE2EDuration="3.182477965s" podCreationTimestamp="2026-01-05 22:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:15:17.178633856 +0000 UTC m=+1409.550633315" watchObservedRunningTime="2026-01-05 22:15:17.182477965 +0000 UTC m=+1409.554477394" Jan 05 22:15:19 crc kubenswrapper[5034]: I0105 22:15:19.554115 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 22:15:19 crc kubenswrapper[5034]: I0105 22:15:19.554720 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 22:15:21 crc kubenswrapper[5034]: I0105 22:15:21.418796 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 05 22:15:21 crc kubenswrapper[5034]: I0105 22:15:21.449442 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 05 22:15:22 crc kubenswrapper[5034]: I0105 22:15:22.239657 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 22:15:22 crc kubenswrapper[5034]: I0105 22:15:22.808727 5034 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 22:15:22 crc kubenswrapper[5034]: I0105 22:15:22.808789 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 22:15:23 crc kubenswrapper[5034]: I0105 22:15:23.823330 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 22:15:23 crc kubenswrapper[5034]: I0105 22:15:23.823373 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 22:15:24 crc kubenswrapper[5034]: I0105 22:15:24.119073 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 05 22:15:24 crc kubenswrapper[5034]: I0105 22:15:24.554202 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 22:15:24 crc kubenswrapper[5034]: I0105 22:15:24.555588 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 22:15:25 crc kubenswrapper[5034]: I0105 22:15:25.568230 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 22:15:25 crc kubenswrapper[5034]: I0105 22:15:25.568230 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.272225 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zzzql"] Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.274620 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.285621 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzzql"] Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.402897 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-catalog-content\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.403002 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwng\" (UniqueName: \"kubernetes.io/projected/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-kube-api-access-gpwng\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.403506 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-utilities\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.506564 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-catalog-content\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.506647 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwng\" (UniqueName: \"kubernetes.io/projected/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-kube-api-access-gpwng\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.506732 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-utilities\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.507360 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-utilities\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.507360 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-catalog-content\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.525880 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gpwng\" (UniqueName: \"kubernetes.io/projected/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-kube-api-access-gpwng\") pod \"certified-operators-zzzql\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:31 crc kubenswrapper[5034]: I0105 22:15:31.603575 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:32 crc kubenswrapper[5034]: I0105 22:15:32.119834 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzzql"] Jan 05 22:15:32 crc kubenswrapper[5034]: W0105 22:15:32.129435 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2ce4e75_44ba_46f2_bbbd_d550d943d5df.slice/crio-211397c084cac81f3eca3e54f90994f88157eca1d1cd6c87c8161a415840fa4e WatchSource:0}: Error finding container 211397c084cac81f3eca3e54f90994f88157eca1d1cd6c87c8161a415840fa4e: Status 404 returned error can't find the container with id 211397c084cac81f3eca3e54f90994f88157eca1d1cd6c87c8161a415840fa4e Jan 05 22:15:32 crc kubenswrapper[5034]: I0105 22:15:32.304054 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzzql" event={"ID":"a2ce4e75-44ba-46f2-bbbd-d550d943d5df","Type":"ContainerStarted","Data":"211397c084cac81f3eca3e54f90994f88157eca1d1cd6c87c8161a415840fa4e"} Jan 05 22:15:32 crc kubenswrapper[5034]: I0105 22:15:32.830592 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 22:15:32 crc kubenswrapper[5034]: I0105 22:15:32.832515 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 22:15:32 crc kubenswrapper[5034]: I0105 22:15:32.840612 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 22:15:32 crc kubenswrapper[5034]: I0105 22:15:32.843969 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 22:15:33 crc kubenswrapper[5034]: I0105 22:15:33.319534 5034 generic.go:334] "Generic (PLEG): container finished" podID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerID="291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449" exitCode=0 Jan 05 22:15:33 crc kubenswrapper[5034]: I0105 22:15:33.319630 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzzql" event={"ID":"a2ce4e75-44ba-46f2-bbbd-d550d943d5df","Type":"ContainerDied","Data":"291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449"} Jan 05 22:15:33 crc kubenswrapper[5034]: I0105 22:15:33.320347 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 22:15:33 crc kubenswrapper[5034]: I0105 22:15:33.322143 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:15:33 crc kubenswrapper[5034]: I0105 22:15:33.331371 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 22:15:34 crc kubenswrapper[5034]: I0105 22:15:34.561435 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 22:15:34 crc kubenswrapper[5034]: I0105 22:15:34.562143 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Jan 05 22:15:34 crc kubenswrapper[5034]: I0105 22:15:34.571181 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 22:15:34 crc kubenswrapper[5034]: I0105 22:15:34.571267 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 22:15:35 crc kubenswrapper[5034]: I0105 22:15:35.346892 5034 generic.go:334] "Generic (PLEG): container finished" podID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerID="101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390" exitCode=0 Jan 05 22:15:35 crc kubenswrapper[5034]: I0105 22:15:35.346999 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzzql" event={"ID":"a2ce4e75-44ba-46f2-bbbd-d550d943d5df","Type":"ContainerDied","Data":"101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390"} Jan 05 22:15:36 crc kubenswrapper[5034]: I0105 22:15:36.358531 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzzql" event={"ID":"a2ce4e75-44ba-46f2-bbbd-d550d943d5df","Type":"ContainerStarted","Data":"399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976"} Jan 05 22:15:36 crc kubenswrapper[5034]: I0105 22:15:36.378632 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zzzql" podStartSLOduration=2.7221306480000003 podStartE2EDuration="5.378611343s" podCreationTimestamp="2026-01-05 22:15:31 +0000 UTC" firstStartedPulling="2026-01-05 22:15:33.321827234 +0000 UTC m=+1425.693826673" lastFinishedPulling="2026-01-05 22:15:35.978307919 +0000 UTC m=+1428.350307368" observedRunningTime="2026-01-05 22:15:36.376438841 +0000 UTC m=+1428.748438290" watchObservedRunningTime="2026-01-05 22:15:36.378611343 +0000 UTC m=+1428.750610782" Jan 05 22:15:41 crc kubenswrapper[5034]: I0105 22:15:41.604074 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:41 crc kubenswrapper[5034]: I0105 22:15:41.604928 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:41 crc kubenswrapper[5034]: I0105 22:15:41.650105 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:42 crc kubenswrapper[5034]: I0105 22:15:42.487266 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:42 crc kubenswrapper[5034]: I0105 22:15:42.575646 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzzql"] Jan 05 22:15:44 crc kubenswrapper[5034]: I0105 22:15:44.453354 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zzzql" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerName="registry-server" containerID="cri-o://399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976" gracePeriod=2 Jan 05 22:15:44 crc kubenswrapper[5034]: I0105 22:15:44.924267 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.119045 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-catalog-content\") pod \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.119445 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpwng\" (UniqueName: \"kubernetes.io/projected/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-kube-api-access-gpwng\") pod \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.119562 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-utilities\") pod \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\" (UID: \"a2ce4e75-44ba-46f2-bbbd-d550d943d5df\") " Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.120334 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-utilities" (OuterVolumeSpecName: "utilities") pod "a2ce4e75-44ba-46f2-bbbd-d550d943d5df" (UID: "a2ce4e75-44ba-46f2-bbbd-d550d943d5df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.127610 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-kube-api-access-gpwng" (OuterVolumeSpecName: "kube-api-access-gpwng") pod "a2ce4e75-44ba-46f2-bbbd-d550d943d5df" (UID: "a2ce4e75-44ba-46f2-bbbd-d550d943d5df"). InnerVolumeSpecName "kube-api-access-gpwng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.182163 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2ce4e75-44ba-46f2-bbbd-d550d943d5df" (UID: "a2ce4e75-44ba-46f2-bbbd-d550d943d5df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.221509 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpwng\" (UniqueName: \"kubernetes.io/projected/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-kube-api-access-gpwng\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.221547 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.221564 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ce4e75-44ba-46f2-bbbd-d550d943d5df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.466694 5034 generic.go:334] "Generic (PLEG): container finished" podID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerID="399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976" exitCode=0 Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.466756 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzzql" event={"ID":"a2ce4e75-44ba-46f2-bbbd-d550d943d5df","Type":"ContainerDied","Data":"399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976"} Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.466769 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzzql" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.466793 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzzql" event={"ID":"a2ce4e75-44ba-46f2-bbbd-d550d943d5df","Type":"ContainerDied","Data":"211397c084cac81f3eca3e54f90994f88157eca1d1cd6c87c8161a415840fa4e"} Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.466818 5034 scope.go:117] "RemoveContainer" containerID="399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.497695 5034 scope.go:117] "RemoveContainer" containerID="101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.502722 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzzql"] Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.512161 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zzzql"] Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.531746 5034 scope.go:117] "RemoveContainer" containerID="291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.584691 5034 scope.go:117] "RemoveContainer" containerID="399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976" Jan 05 22:15:45 crc kubenswrapper[5034]: E0105 22:15:45.585407 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976\": container with ID starting with 399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976 not found: ID does not exist" containerID="399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.585449 
5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976"} err="failed to get container status \"399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976\": rpc error: code = NotFound desc = could not find container \"399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976\": container with ID starting with 399588f1317f97b473f2f6ebebf9b18512af2dba7282321950a6db283872e976 not found: ID does not exist" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.585476 5034 scope.go:117] "RemoveContainer" containerID="101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390" Jan 05 22:15:45 crc kubenswrapper[5034]: E0105 22:15:45.586356 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390\": container with ID starting with 101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390 not found: ID does not exist" containerID="101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.586376 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390"} err="failed to get container status \"101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390\": rpc error: code = NotFound desc = could not find container \"101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390\": container with ID starting with 101476ff87866de6395da03b0115bac574fd81dfa8aec085a7e8a31eeea3b390 not found: ID does not exist" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.586390 5034 scope.go:117] "RemoveContainer" containerID="291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449" Jan 05 22:15:45 crc kubenswrapper[5034]: E0105 22:15:45.586786 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449\": container with ID starting with 291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449 not found: ID does not exist" containerID="291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.586807 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449"} err="failed to get container status \"291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449\": rpc error: code = NotFound desc = could not find container \"291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449\": container with ID starting with 291d640cd57b25ddeed8e7049a6f4de4f1c59adc837b5b9eeda6b3e01073a449 not found: ID does not exist" Jan 05 22:15:45 crc kubenswrapper[5034]: I0105 22:15:45.855165 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" path="/var/lib/kubelet/pods/a2ce4e75-44ba-46f2-bbbd-d550d943d5df/volumes" Jan 05 22:15:50 crc kubenswrapper[5034]: I0105 22:15:50.468917 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:15:50 crc kubenswrapper[5034]: I0105 22:15:50.469273 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:15:52 crc kubenswrapper[5034]: I0105 22:15:52.994734 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 05 22:15:52 crc kubenswrapper[5034]: I0105 22:15:52.995465 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="a63e68c4-06b7-4513-ac92-6415cbd75e88" containerName="openstackclient" containerID="cri-o://da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419" gracePeriod=2 Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.017803 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jbklw"] Jan 05 22:15:53 crc kubenswrapper[5034]: E0105 22:15:53.018964 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerName="extract-content" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.018992 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerName="extract-content" Jan 05 22:15:53 crc kubenswrapper[5034]: E0105 22:15:53.019008 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerName="extract-utilities" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.019020 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerName="extract-utilities" Jan 05 22:15:53 crc kubenswrapper[5034]: E0105 22:15:53.019069 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerName="registry-server" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.019095 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerName="registry-server" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.019440 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ce4e75-44ba-46f2-bbbd-d550d943d5df" containerName="registry-server" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.020607 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jbklw" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.028851 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.035190 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.051314 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jbklw"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.111483 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hkgrk"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.143749 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts\") pod \"root-account-create-update-jbklw\" (UID: \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\") " pod="openstack/root-account-create-update-jbklw" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.143799 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tfd6\" (UniqueName: \"kubernetes.io/projected/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-kube-api-access-7tfd6\") pod \"root-account-create-update-jbklw\" (UID: \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\") " pod="openstack/root-account-create-update-jbklw" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.149405 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hkgrk"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.238211 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.252009 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts\") pod \"root-account-create-update-jbklw\" (UID: \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\") " pod="openstack/root-account-create-update-jbklw" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.252364 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tfd6\" (UniqueName: \"kubernetes.io/projected/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-kube-api-access-7tfd6\") pod \"root-account-create-update-jbklw\" (UID: \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\") " pod="openstack/root-account-create-update-jbklw" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.253898 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts\") pod \"root-account-create-update-jbklw\" (UID: \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\") " pod="openstack/root-account-create-update-jbklw" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.294600 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-v4mvr"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.309409 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-xzj87"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.309806 5034 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ovn-controller-metrics-xzj87" podUID="9317f553-2101-4507-8f08-52e23105b5c1" containerName="openstack-network-exporter" containerID="cri-o://b97d5cd29ffddf99657a5c2482efc985154a68be845fbf75fb23b805c3a393b7" gracePeriod=30 Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.323149 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4gbcl"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.340350 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-2cvvx"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.368766 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tfd6\" (UniqueName: \"kubernetes.io/projected/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-kube-api-access-7tfd6\") pod \"root-account-create-update-jbklw\" (UID: \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\") " pod="openstack/root-account-create-update-jbklw" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.378112 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jbklw" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.421860 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-2cvvx"] Jan 05 22:15:53 crc kubenswrapper[5034]: E0105 22:15:53.465286 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 05 22:15:53 crc kubenswrapper[5034]: E0105 22:15:53.465394 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data podName:94526d3f-1e21-4eef-abb7-5cd05bfb1670 nodeName:}" failed. No retries permitted until 2026-01-05 22:15:53.965366421 +0000 UTC m=+1446.337365860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data") pod "rabbitmq-server-0" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670") : configmap "rabbitmq-config-data" not found Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.502880 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-pp9cm"] Jan 05 22:15:53 crc kubenswrapper[5034]: E0105 22:15:53.503787 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63e68c4-06b7-4513-ac92-6415cbd75e88" containerName="openstackclient" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.503805 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63e68c4-06b7-4513-ac92-6415cbd75e88" containerName="openstackclient" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.504051 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63e68c4-06b7-4513-ac92-6415cbd75e88" containerName="openstackclient" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.504895 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.535287 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.548365 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-pp9cm"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.569281 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/352134f8-9e6a-487a-8afd-b70ab941cd17-operator-scripts\") pod \"nova-cell0-dbbc-account-create-update-pp9cm\" (UID: \"352134f8-9e6a-487a-8afd-b70ab941cd17\") " pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.569448 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmsj\" (UniqueName: \"kubernetes.io/projected/352134f8-9e6a-487a-8afd-b70ab941cd17-kube-api-access-7xmsj\") pod \"nova-cell0-dbbc-account-create-update-pp9cm\" (UID: \"352134f8-9e6a-487a-8afd-b70ab941cd17\") " pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.594355 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xzj87_9317f553-2101-4507-8f08-52e23105b5c1/openstack-network-exporter/0.log" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.594414 5034 generic.go:334] "Generic (PLEG): container finished" podID="9317f553-2101-4507-8f08-52e23105b5c1" containerID="b97d5cd29ffddf99657a5c2482efc985154a68be845fbf75fb23b805c3a393b7" exitCode=2 Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.594456 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xzj87" event={"ID":"9317f553-2101-4507-8f08-52e23105b5c1","Type":"ContainerDied","Data":"b97d5cd29ffddf99657a5c2482efc985154a68be845fbf75fb23b805c3a393b7"} Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.614430 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8420-account-create-update-qgcgf"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.615979 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.641049 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.644881 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8420-account-create-update-kv5xq"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.666926 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8420-account-create-update-kv5xq"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.673489 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xmsj\" (UniqueName: \"kubernetes.io/projected/352134f8-9e6a-487a-8afd-b70ab941cd17-kube-api-access-7xmsj\") pod \"nova-cell0-dbbc-account-create-update-pp9cm\" (UID: \"352134f8-9e6a-487a-8afd-b70ab941cd17\") " pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.673628 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9x6s\" (UniqueName: \"kubernetes.io/projected/8d62b8ca-f71a-424f-bbee-cc709c382ba9-kube-api-access-s9x6s\") pod \"placement-8420-account-create-update-qgcgf\" (UID: \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\") " pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.673681 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/352134f8-9e6a-487a-8afd-b70ab941cd17-operator-scripts\") pod \"nova-cell0-dbbc-account-create-update-pp9cm\" (UID: \"352134f8-9e6a-487a-8afd-b70ab941cd17\") " pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.673802 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d62b8ca-f71a-424f-bbee-cc709c382ba9-operator-scripts\") pod \"placement-8420-account-create-update-qgcgf\" (UID: \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\") " pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.674690 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8420-account-create-update-qgcgf"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.675210 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/352134f8-9e6a-487a-8afd-b70ab941cd17-operator-scripts\") pod \"nova-cell0-dbbc-account-create-update-pp9cm\" (UID: \"352134f8-9e6a-487a-8afd-b70ab941cd17\") " pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.730060 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c4ab-account-create-update-gmflk"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.744965 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xmsj\" (UniqueName: \"kubernetes.io/projected/352134f8-9e6a-487a-8afd-b70ab941cd17-kube-api-access-7xmsj\") pod \"nova-cell0-dbbc-account-create-update-pp9cm\" (UID: \"352134f8-9e6a-487a-8afd-b70ab941cd17\") " pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 
22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.766247 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c4ab-account-create-update-gmflk"] Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.775734 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9x6s\" (UniqueName: \"kubernetes.io/projected/8d62b8ca-f71a-424f-bbee-cc709c382ba9-kube-api-access-s9x6s\") pod \"placement-8420-account-create-update-qgcgf\" (UID: \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\") " pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.775852 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d62b8ca-f71a-424f-bbee-cc709c382ba9-operator-scripts\") pod \"placement-8420-account-create-update-qgcgf\" (UID: \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\") " pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.776629 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d62b8ca-f71a-424f-bbee-cc709c382ba9-operator-scripts\") pod \"placement-8420-account-create-update-qgcgf\" (UID: \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\") " pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.792490 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="galera" probeResult="failure" output="command timed out" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.794945 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="galera" probeResult="failure" output="command timed out" Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.918023 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9x6s\" (UniqueName: \"kubernetes.io/projected/8d62b8ca-f71a-424f-bbee-cc709c382ba9-kube-api-access-s9x6s\") pod \"placement-8420-account-create-update-qgcgf\" (UID: \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\") " pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:15:53 crc kubenswrapper[5034]: E0105 22:15:53.981492 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 05 22:15:53 crc kubenswrapper[5034]: I0105 22:15:53.985157 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:53.996698 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b6ce724-2b29-4249-ac22-c95de9c2bb14" path="/var/lib/kubelet/pods/1b6ce724-2b29-4249-ac22-c95de9c2bb14/volumes" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:53.997774 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:15:54 crc kubenswrapper[5034]: E0105 22:15:54.000802 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data podName:94526d3f-1e21-4eef-abb7-5cd05bfb1670 nodeName:}" failed. 
No retries permitted until 2026-01-05 22:15:55.000764796 +0000 UTC m=+1447.372764245 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data") pod "rabbitmq-server-0" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670") : configmap "rabbitmq-config-data" not found Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.007531 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31cf2934-c66e-40ca-81f5-26c0efff8bd4" path="/var/lib/kubelet/pods/31cf2934-c66e-40ca-81f5-26c0efff8bd4/volumes" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.019735 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ad3d67-1a07-4021-ac14-1f7660deedb9" path="/var/lib/kubelet/pods/e1ad3d67-1a07-4021-ac14-1f7660deedb9/volumes" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.026692 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e640fc-0cfe-430b-9b7a-90c5d68e6b76" path="/var/lib/kubelet/pods/e8e640fc-0cfe-430b-9b7a-90c5d68e6b76/volumes" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.029029 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.162289 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.190346 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="ovn-northd" containerID="cri-o://0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.191436 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="openstack-network-exporter" containerID="cri-o://cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: E0105 22:15:54.224893 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 05 22:15:54 crc kubenswrapper[5034]: E0105 22:15:54.224951 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data podName:65a6b236-e04b-494a-a18e-5d1a8a5ae02a nodeName:}" failed. No retries permitted until 2026-01-05 22:15:54.724934424 +0000 UTC m=+1447.096933863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data") pod "rabbitmq-cell1-server-0" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a") : configmap "rabbitmq-cell1-config-data" not found Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.237344 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e1e8-account-create-update-z55c9"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.262305 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-64l6z"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.314837 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e1e8-account-create-update-z55c9"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.346105 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-64l6z"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.434928 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ea2b-account-create-update-6jx2r"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.463539 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ea2b-account-create-update-6jx2r"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.504248 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-sh52t"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.609422 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-htrf9"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.657777 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-htrf9"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.723536 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-8wjbq"] Jan 05 22:15:54 crc kubenswrapper[5034]: E0105 22:15:54.743194 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 05 22:15:54 crc kubenswrapper[5034]: E0105 22:15:54.756375 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data podName:65a6b236-e04b-494a-a18e-5d1a8a5ae02a nodeName:}" failed. No retries permitted until 2026-01-05 22:15:55.756318346 +0000 UTC m=+1448.128317785 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data") pod "rabbitmq-cell1-server-0" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a") : configmap "rabbitmq-cell1-config-data" not found Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.766508 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2188-account-create-update-r8nqh"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.832479 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-sh52t"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.862942 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xzj87_9317f553-2101-4507-8f08-52e23105b5c1/openstack-network-exporter/0.log" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.863072 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.883467 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2188-account-create-update-r8nqh"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.935161 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-8wjbq"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.965276 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6s2n\" (UniqueName: \"kubernetes.io/projected/9317f553-2101-4507-8f08-52e23105b5c1-kube-api-access-c6s2n\") pod \"9317f553-2101-4507-8f08-52e23105b5c1\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.965821 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovs-rundir\") pod \"9317f553-2101-4507-8f08-52e23105b5c1\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.965857 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-metrics-certs-tls-certs\") pod \"9317f553-2101-4507-8f08-52e23105b5c1\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.965987 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-combined-ca-bundle\") pod \"9317f553-2101-4507-8f08-52e23105b5c1\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.966139 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovn-rundir\") pod \"9317f553-2101-4507-8f08-52e23105b5c1\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.966174 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317f553-2101-4507-8f08-52e23105b5c1-config\") pod \"9317f553-2101-4507-8f08-52e23105b5c1\" (UID: \"9317f553-2101-4507-8f08-52e23105b5c1\") " Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.973459 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "9317f553-2101-4507-8f08-52e23105b5c1" (UID: "9317f553-2101-4507-8f08-52e23105b5c1"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.973530 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "9317f553-2101-4507-8f08-52e23105b5c1" (UID: "9317f553-2101-4507-8f08-52e23105b5c1"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.974373 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9317f553-2101-4507-8f08-52e23105b5c1-config" (OuterVolumeSpecName: "config") pod "9317f553-2101-4507-8f08-52e23105b5c1" (UID: "9317f553-2101-4507-8f08-52e23105b5c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.980432 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.981107 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-server" containerID="cri-o://1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.981789 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="swift-recon-cron" containerID="cri-o://9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.981870 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="rsync" containerID="cri-o://c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.981922 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-expirer" containerID="cri-o://0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.981976 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-updater" containerID="cri-o://037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982028 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-auditor" containerID="cri-o://0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982094 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-replicator" containerID="cri-o://b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982142 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-server" containerID="cri-o://832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982188 5034 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-updater" containerID="cri-o://18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982241 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-auditor" containerID="cri-o://86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982283 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-replicator" containerID="cri-o://41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982328 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-server" containerID="cri-o://4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982378 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-reaper" containerID="cri-o://f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982425 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-auditor" containerID="cri-o://56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.982481 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-replicator" containerID="cri-o://94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370" gracePeriod=30 Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.988777 5034 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.988806 5034 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9317f553-2101-4507-8f08-52e23105b5c1-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:54 crc kubenswrapper[5034]: I0105 22:15:54.988818 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9317f553-2101-4507-8f08-52e23105b5c1-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.012301 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9317f553-2101-4507-8f08-52e23105b5c1-kube-api-access-c6s2n" (OuterVolumeSpecName: "kube-api-access-c6s2n") pod "9317f553-2101-4507-8f08-52e23105b5c1" (UID: "9317f553-2101-4507-8f08-52e23105b5c1"). InnerVolumeSpecName "kube-api-access-c6s2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.035618 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-gcdp2"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.035904 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" podUID="92d2026b-e43c-47d5-ad78-e532a664f033" containerName="dnsmasq-dns" containerID="cri-o://20fbb538b44cc958c0860e6e3d037b5579a58d90268eae871a9788dbe68e60d7" gracePeriod=10 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.075476 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9mfg"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.092842 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6s2n\" (UniqueName: \"kubernetes.io/projected/9317f553-2101-4507-8f08-52e23105b5c1-kube-api-access-c6s2n\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:55 crc kubenswrapper[5034]: E0105 22:15:55.092851 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 05 22:15:55 crc kubenswrapper[5034]: E0105 22:15:55.093002 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data podName:94526d3f-1e21-4eef-abb7-5cd05bfb1670 nodeName:}" failed. No retries permitted until 2026-01-05 22:15:57.092956073 +0000 UTC m=+1449.464955502 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data") pod "rabbitmq-server-0" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670") : configmap "rabbitmq-config-data" not found Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.105348 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9mfg"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.141542 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ptccl"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.206811 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9317f553-2101-4507-8f08-52e23105b5c1" (UID: "9317f553-2101-4507-8f08-52e23105b5c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.219278 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dkbzg"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.269698 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.273328 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" containerName="openstack-network-exporter" containerID="cri-o://f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b" gracePeriod=300 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.311190 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.319579 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ptccl"] Jan 05 22:15:55 crc kubenswrapper[5034]: E0105 22:15:55.334134 5034 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-4gbcl" message=< Jan 05 22:15:55 crc kubenswrapper[5034]: Exiting ovn-controller (1) [ OK ] Jan 05 22:15:55 crc kubenswrapper[5034]: > Jan 05 22:15:55 crc kubenswrapper[5034]: E0105 22:15:55.334200 5034 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-4gbcl" podUID="8174d3dc-0931-484a-850f-3649234ef9fc" containerName="ovn-controller" containerID="cri-o://c5a11337c879be24ab652f59ee08fa45eea5831a7d805b7ba9723c0e7a770afa" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.334338 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-4gbcl" podUID="8174d3dc-0931-484a-850f-3649234ef9fc" containerName="ovn-controller" containerID="cri-o://c5a11337c879be24ab652f59ee08fa45eea5831a7d805b7ba9723c0e7a770afa" gracePeriod=28 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.367069 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dkbzg"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.414901 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5l8t7"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.453342 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "9317f553-2101-4507-8f08-52e23105b5c1" (UID: "9317f553-2101-4507-8f08-52e23105b5c1"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.468996 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5l8t7"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.471988 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" containerName="ovsdbserver-nb" containerID="cri-o://2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7" gracePeriod=300 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.515059 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.515739 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerName="openstack-network-exporter" containerID="cri-o://1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833" gracePeriod=300 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.520140 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9317f553-2101-4507-8f08-52e23105b5c1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.579155 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-664f75f5b6-lz6hv"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.579543 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-664f75f5b6-lz6hv" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerName="placement-log" containerID="cri-o://08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.580826 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-664f75f5b6-lz6hv" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerName="placement-api" containerID="cri-o://0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.619404 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.619761 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerName="cinder-scheduler" containerID="cri-o://598559a378fd6ca644d7dbe7962a49bd4a282bb0608ed7a7db6dd7fff095ac06" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.620478 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerName="probe" containerID="cri-o://72960a55513e9ecef8e41d52208d6be80b39479282ae6e2f1bc413cfa48dcc2f" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: W0105 22:15:55.664498 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0623db6b_2e6a_4739_8c7f_ec9a98b51d93.slice/crio-3b73b163629fa502e921868df8cd950a0b863245f910c7a9e066da1e9ac99e47 WatchSource:0}: Error finding container 3b73b163629fa502e921868df8cd950a0b863245f910c7a9e066da1e9ac99e47: Status 404 returned error can't find the 
container with id 3b73b163629fa502e921868df8cd950a0b863245f910c7a9e066da1e9ac99e47 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.679735 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.685948 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.693404 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerName="ovsdbserver-sb" containerID="cri-o://7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5" gracePeriod=300 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.709785 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" containerID="cri-o://493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" gracePeriod=28 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.740216 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.740689 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api-log" containerID="cri-o://8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.741110 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api" containerID="cri-o://d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: E0105 22:15:55.763280 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 05 22:15:55 crc kubenswrapper[5034]: E0105 22:15:55.763416 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data podName:65a6b236-e04b-494a-a18e-5d1a8a5ae02a nodeName:}" failed. No retries permitted until 2026-01-05 22:15:57.763377738 +0000 UTC m=+1450.135377177 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data") pod "rabbitmq-cell1-server-0" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a") : configmap "rabbitmq-cell1-config-data" not found Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.803502 5034 generic.go:334] "Generic (PLEG): container finished" podID="eda1f147-b2fb-4349-ba17-674073870a4b" containerID="cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17" exitCode=2 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.803687 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eda1f147-b2fb-4349-ba17-674073870a4b","Type":"ContainerDied","Data":"cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.811574 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-92frq"] Jan 05 22:15:55 crc kubenswrapper[5034]: E0105 22:15:55.812508 5034 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 05 22:15:55 crc kubenswrapper[5034]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 05 22:15:55 crc kubenswrapper[5034]: + source /usr/local/bin/container-scripts/functions Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNBridge=br-int Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNRemote=tcp:localhost:6642 Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNEncapType=geneve Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNAvailabilityZones= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ EnableChassisAsGateway=true Jan 05 22:15:55 crc kubenswrapper[5034]: ++ PhysicalNetworks= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNHostName= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 05 22:15:55 crc kubenswrapper[5034]: ++ ovs_dir=/var/lib/openvswitch Jan 05 22:15:55 crc kubenswrapper[5034]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 05 22:15:55 crc kubenswrapper[5034]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 05 22:15:55 crc kubenswrapper[5034]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + cleanup_ovsdb_server_semaphore Jan 05 22:15:55 crc kubenswrapper[5034]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 05 22:15:55 crc kubenswrapper[5034]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 05 22:15:55 crc kubenswrapper[5034]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-v4mvr" message=< Jan 05 22:15:55 crc kubenswrapper[5034]: Exiting ovsdb-server (5) [ OK ] Jan 05 22:15:55 crc kubenswrapper[5034]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 05 22:15:55 crc kubenswrapper[5034]: + source /usr/local/bin/container-scripts/functions Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNBridge=br-int Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNRemote=tcp:localhost:6642 Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNEncapType=geneve Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNAvailabilityZones= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ EnableChassisAsGateway=true Jan 05 22:15:55 crc kubenswrapper[5034]: ++ PhysicalNetworks= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNHostName= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 05 22:15:55 crc kubenswrapper[5034]: ++ ovs_dir=/var/lib/openvswitch Jan 05 22:15:55 crc kubenswrapper[5034]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 05 22:15:55 crc kubenswrapper[5034]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 05 22:15:55 crc kubenswrapper[5034]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + cleanup_ovsdb_server_semaphore Jan 05 22:15:55 crc kubenswrapper[5034]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 05 22:15:55 crc kubenswrapper[5034]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 05 22:15:55 crc kubenswrapper[5034]: > Jan 05 22:15:55 crc kubenswrapper[5034]: E0105 22:15:55.812558 5034 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 05 22:15:55 crc kubenswrapper[5034]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 05 22:15:55 crc kubenswrapper[5034]: + source /usr/local/bin/container-scripts/functions Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNBridge=br-int Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNRemote=tcp:localhost:6642 Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNEncapType=geneve Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNAvailabilityZones= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ EnableChassisAsGateway=true Jan 05 22:15:55 crc kubenswrapper[5034]: ++ PhysicalNetworks= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ OVNHostName= Jan 05 22:15:55 crc kubenswrapper[5034]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 05 22:15:55 crc kubenswrapper[5034]: ++ ovs_dir=/var/lib/openvswitch Jan 05 22:15:55 crc kubenswrapper[5034]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 05 22:15:55 crc kubenswrapper[5034]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 05 22:15:55 crc kubenswrapper[5034]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + sleep 0.5 Jan 05 22:15:55 crc kubenswrapper[5034]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 05 22:15:55 crc kubenswrapper[5034]: + cleanup_ovsdb_server_semaphore Jan 05 22:15:55 crc kubenswrapper[5034]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 05 22:15:55 crc kubenswrapper[5034]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 05 22:15:55 crc kubenswrapper[5034]: > pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" containerID="cri-o://578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.812595 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" containerID="cri-o://578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" gracePeriod=28 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.819344 5034 generic.go:334] "Generic (PLEG): container finished" podID="d8f99f63-df74-4392-a5fc-bf090571266f" containerID="f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b" exitCode=2 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.819445 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8f99f63-df74-4392-a5fc-bf090571266f","Type":"ContainerDied","Data":"f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.827194 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerName="rabbitmq" containerID="cri-o://6142e99eab6f8d5fa2aa4392f035c3a6396193c921db5594487e88a07ec633b0" gracePeriod=604800 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.829961 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-92frq"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.860360 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6985bd-1df9-4935-9303-399e57584e90" path="/var/lib/kubelet/pods/2b6985bd-1df9-4935-9303-399e57584e90/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.861486 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dfdf242-b445-4031-948d-96047f780bc5" path="/var/lib/kubelet/pods/2dfdf242-b445-4031-948d-96047f780bc5/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862487 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ab947b-542c-4bd4-a4e9-493332d7caf5" path="/var/lib/kubelet/pods/32ab947b-542c-4bd4-a4e9-493332d7caf5/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862776 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862813 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862821 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a" exitCode=0 Jan 05 22:15:55 crc 
kubenswrapper[5034]: I0105 22:15:55.862830 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862838 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862846 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862854 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862862 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862869 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862876 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.862882 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.863340 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33440247-28b2-4dbb-97ba-868cda48348e" path="/var/lib/kubelet/pods/33440247-28b2-4dbb-97ba-868cda48348e/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.864826 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492f33ff-82d7-4355-a412-faf4e879a228" path="/var/lib/kubelet/pods/492f33ff-82d7-4355-a412-faf4e879a228/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.864969 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config\") pod \"a63e68c4-06b7-4513-ac92-6415cbd75e88\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.865035 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-combined-ca-bundle\") pod \"a63e68c4-06b7-4513-ac92-6415cbd75e88\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.865064 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htn8n\" (UniqueName: \"kubernetes.io/projected/a63e68c4-06b7-4513-ac92-6415cbd75e88-kube-api-access-htn8n\") pod 
\"a63e68c4-06b7-4513-ac92-6415cbd75e88\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.865185 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config-secret\") pod \"a63e68c4-06b7-4513-ac92-6415cbd75e88\" (UID: \"a63e68c4-06b7-4513-ac92-6415cbd75e88\") " Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.866759 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5158186d-181d-498c-8eeb-c222566958f7" path="/var/lib/kubelet/pods/5158186d-181d-498c-8eeb-c222566958f7/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.867504 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad742f5-9855-40e9-953f-fc2cf3baee89" path="/var/lib/kubelet/pods/7ad742f5-9855-40e9-953f-fc2cf3baee89/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.868153 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93443f38-a401-43ed-8ba6-7e0ebef66eb5" path="/var/lib/kubelet/pods/93443f38-a401-43ed-8ba6-7e0ebef66eb5/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.872282 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca996351-9e8b-45d0-91d2-7afc4c65f9cb" path="/var/lib/kubelet/pods/ca996351-9e8b-45d0-91d2-7afc4c65f9cb/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.873018 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91a5139-5537-4578-ab4f-67d52927afa9" path="/var/lib/kubelet/pods/e91a5139-5537-4578-ab4f-67d52927afa9/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.880628 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2741770-25b1-43ea-878d-f57b57e65fac" path="/var/lib/kubelet/pods/f2741770-25b1-43ea-878d-f57b57e65fac/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.881965 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea79208-89f2-486d-830a-d7ab3bab3342" path="/var/lib/kubelet/pods/fea79208-89f2-486d-830a-d7ab3bab3342/volumes" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.903583 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63e68c4-06b7-4513-ac92-6415cbd75e88-kube-api-access-htn8n" (OuterVolumeSpecName: "kube-api-access-htn8n") pod "a63e68c4-06b7-4513-ac92-6415cbd75e88" (UID: "a63e68c4-06b7-4513-ac92-6415cbd75e88"). InnerVolumeSpecName "kube-api-access-htn8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926039 5034 generic.go:334] "Generic (PLEG): container finished" podID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerID="08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652" exitCode=143 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926472 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926501 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926529 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926539 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926549 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926561 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926578 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926656 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926671 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926680 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6fzvz"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926694 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6fzvz"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926711 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926739 5034 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926753 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926765 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926777 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664f75f5b6-lz6hv" event={"ID":"d11dc2db-1f91-4ec6-9efd-333fcafface4","Type":"ContainerDied","Data":"08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926790 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9182-account-create-update-w78zx"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.926825 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c458b9699-9b8w4"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.941932 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9182-account-create-update-w78zx"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.941963 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.942198 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerName="glance-log" containerID="cri-o://70ee7ba0dcf1db4ff6a1836f1e8d9db65589363dab8ad409a064b32b276d7892" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.942741 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-log" containerID="cri-o://3291212e6637ec18d4d97f490e2b13376227e40062c2978ce122475ac1ccee20" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.942821 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerName="glance-httpd" containerID="cri-o://d0c499f0a927479b340ab820f58c0578043492c63baf9a7836426b0f832cdd3a" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.943216 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-metadata" containerID="cri-o://1b21cd994fe20b3b5d50504e11ca0a8e875e6198715a73ad6b85c6ab13ae3f03" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.943315 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c458b9699-9b8w4" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-api" containerID="cri-o://8a4ccd2cd507ddb6502cfdecb3eea7f0e3fcbcc526f6e6220ee67d322421fe39" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.943450 5034 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c458b9699-9b8w4" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-httpd" containerID="cri-o://3b923196a4d918a3fdfb27750f013d3bb48b93297dee98bc255d7c448bb47281" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.957706 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.958054 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-log" containerID="cri-o://60e0dc06f5d11e4ea971c7f7cf856032cf0326af71dd5f35ab34721c3f181e11" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.958276 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-api" containerID="cri-o://e71299f8473ea6e97ab3f521671935fa9ae99d0a935b71e63ce2ceb108169b56" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.968491 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htn8n\" (UniqueName: \"kubernetes.io/projected/a63e68c4-06b7-4513-ac92-6415cbd75e88-kube-api-access-htn8n\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.985532 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xzj87_9317f553-2101-4507-8f08-52e23105b5c1/openstack-network-exporter/0.log" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.985640 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xzj87" event={"ID":"9317f553-2101-4507-8f08-52e23105b5c1","Type":"ContainerDied","Data":"30e2967276fa8076789279ab9e5775ebc6a45cf363737fe4c6fbb99b755de869"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.985680 5034 scope.go:117] "RemoveContainer" containerID="b97d5cd29ffddf99657a5c2482efc985154a68be845fbf75fb23b805c3a393b7" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.985846 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xzj87" Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.991351 5034 generic.go:334] "Generic (PLEG): container finished" podID="8174d3dc-0931-484a-850f-3649234ef9fc" containerID="c5a11337c879be24ab652f59ee08fa45eea5831a7d805b7ba9723c0e7a770afa" exitCode=0 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.991529 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gbcl" event={"ID":"8174d3dc-0931-484a-850f-3649234ef9fc","Type":"ContainerDied","Data":"c5a11337c879be24ab652f59ee08fa45eea5831a7d805b7ba9723c0e7a770afa"} Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.996607 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b9b4698bd-747dm"] Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.997224 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b9b4698bd-747dm" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api-log" containerID="cri-o://dc580afcb0d5964a6770e1805a08f6ab8ba168d592cf15a21e4431f5b1c61076" gracePeriod=30 Jan 05 22:15:55 crc kubenswrapper[5034]: I0105 22:15:55.997522 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b9b4698bd-747dm" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api" containerID="cri-o://6800fbb56148e87618fda2df370bdf264e113c0b015622bf898f8261f8fafda1" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.026537 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jbklw" event={"ID":"0623db6b-2e6a-4739-8c7f-ec9a98b51d93","Type":"ContainerStarted","Data":"3b73b163629fa502e921868df8cd950a0b863245f910c7a9e066da1e9ac99e47"} Jan 05 22:15:56 crc kubenswrapper[5034]: E0105 22:15:56.048774 5034 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 05 22:15:56 crc kubenswrapper[5034]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: if [ -n "placement" ]; then Jan 05 22:15:56 crc kubenswrapper[5034]: GRANT_DATABASE="placement" Jan 05 22:15:56 crc kubenswrapper[5034]: else Jan 05 22:15:56 crc kubenswrapper[5034]: GRANT_DATABASE="*" Jan 05 22:15:56 crc kubenswrapper[5034]: fi Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: # going for maximum compatibility here: Jan 05 22:15:56 crc kubenswrapper[5034]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 05 22:15:56 crc kubenswrapper[5034]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 05 22:15:56 crc kubenswrapper[5034]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 05 22:15:56 crc kubenswrapper[5034]: # support updates Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: $MYSQL_CMD < logger="UnhandledError" Jan 05 22:15:56 crc kubenswrapper[5034]: E0105 22:15:56.050410 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-8420-account-create-update-qgcgf" podUID="8d62b8ca-f71a-424f-bbee-cc709c382ba9" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.092244 5034 generic.go:334] "Generic (PLEG): container finished" podID="92d2026b-e43c-47d5-ad78-e532a664f033" containerID="20fbb538b44cc958c0860e6e3d037b5579a58d90268eae871a9788dbe68e60d7" exitCode=0 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.092730 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" event={"ID":"92d2026b-e43c-47d5-ad78-e532a664f033","Type":"ContainerDied","Data":"20fbb538b44cc958c0860e6e3d037b5579a58d90268eae871a9788dbe68e60d7"} Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.097139 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6d6dccdcd5-gglfm"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.097479 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" containerName="barbican-worker-log" containerID="cri-o://ed46a153785fa5cc88884b8676fd407f393552edec4eb2fef58f1e35704d646a" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.098055 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" containerName="barbican-worker" containerID="cri-o://d6abfd2461105e8e1eea8e2d6a6889e3b27bf573f5c2e81d53d24675eaa17698" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.113364 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-679959649b-bksnm"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.113629 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-679959649b-bksnm" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerName="barbican-keystone-listener-log" containerID="cri-o://e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.113759 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-679959649b-bksnm" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerName="barbican-keystone-listener" containerID="cri-o://f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.127338 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a63e68c4-06b7-4513-ac92-6415cbd75e88" (UID: "a63e68c4-06b7-4513-ac92-6415cbd75e88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.147573 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.148009 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerName="glance-log" containerID="cri-o://4ebce1e8d8500a36a9885aca5996773d3f25ae62e84300ef4e6448cbe1e4b976" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.148181 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerName="glance-httpd" containerID="cri-o://c73e4953491ec9f47f29267b9f26809a0789ba4fdcd8a63a9120ac77e00f3874" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.156362 5034 generic.go:334] "Generic (PLEG): container finished" podID="a63e68c4-06b7-4513-ac92-6415cbd75e88" containerID="da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419" exitCode=137 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.156435 5034 scope.go:117] "RemoveContainer" containerID="da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.156657 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 22:15:56 crc kubenswrapper[5034]: E0105 22:15:56.178873 5034 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 05 22:15:56 crc kubenswrapper[5034]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: if [ -n "nova_cell0" ]; then Jan 05 22:15:56 crc kubenswrapper[5034]: GRANT_DATABASE="nova_cell0" Jan 05 22:15:56 crc kubenswrapper[5034]: else Jan 05 22:15:56 crc kubenswrapper[5034]: GRANT_DATABASE="*" Jan 05 22:15:56 crc kubenswrapper[5034]: fi Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: # going for maximum compatibility here: Jan 05 22:15:56 crc kubenswrapper[5034]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 05 22:15:56 crc kubenswrapper[5034]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 05 22:15:56 crc kubenswrapper[5034]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 05 22:15:56 crc kubenswrapper[5034]: # support updates Jan 05 22:15:56 crc kubenswrapper[5034]: Jan 05 22:15:56 crc kubenswrapper[5034]: $MYSQL_CMD < logger="UnhandledError" Jan 05 22:15:56 crc kubenswrapper[5034]: E0105 22:15:56.180710 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" podUID="352134f8-9e6a-487a-8afd-b70ab941cd17" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.181052 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.191032 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a63e68c4-06b7-4513-ac92-6415cbd75e88" (UID: "a63e68c4-06b7-4513-ac92-6415cbd75e88"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.219283 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.227529 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-c82dz"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.264346 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jbklw"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.264699 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4gbcl" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.276176 5034 scope.go:117] "RemoveContainer" containerID="da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419" Jan 05 22:15:56 crc kubenswrapper[5034]: E0105 22:15:56.280508 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419\": container with ID starting with da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419 not found: ID does not exist" containerID="da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.280553 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419"} err="failed to get container status \"da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419\": rpc error: code = NotFound desc = could not find container \"da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419\": container with ID starting with da693828e0d48a0c547ac0b8cb985c9bb0516e130dda8f939bd2a05113635419 not found: ID does not exist" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.283604 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.286704 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-c82dz"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.315259 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.315567 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="80869f0d-0e2c-4235-b5e0-3519e6c95ded" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.344656 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8420-account-create-update-qgcgf"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.370483 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jw9kr"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384336 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-nb\") pod \"92d2026b-e43c-47d5-ad78-e532a664f033\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384392 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-config\") pod \"92d2026b-e43c-47d5-ad78-e532a664f033\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384458 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-swift-storage-0\") pod \"92d2026b-e43c-47d5-ad78-e532a664f033\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384498 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8174d3dc-0931-484a-850f-3649234ef9fc-scripts\") pod \"8174d3dc-0931-484a-850f-3649234ef9fc\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384571 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7kw8\" (UniqueName: \"kubernetes.io/projected/8174d3dc-0931-484a-850f-3649234ef9fc-kube-api-access-g7kw8\") pod \"8174d3dc-0931-484a-850f-3649234ef9fc\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384595 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-sb\") pod \"92d2026b-e43c-47d5-ad78-e532a664f033\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384620 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-svc\") pod \"92d2026b-e43c-47d5-ad78-e532a664f033\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384639 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-ovn-controller-tls-certs\") pod \"8174d3dc-0931-484a-850f-3649234ef9fc\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384661 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run-ovn\") pod \"8174d3dc-0931-484a-850f-3649234ef9fc\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384703 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run\") pod \"8174d3dc-0931-484a-850f-3649234ef9fc\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384807 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-log-ovn\") pod \"8174d3dc-0931-484a-850f-3649234ef9fc\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384845 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-combined-ca-bundle\") pod \"8174d3dc-0931-484a-850f-3649234ef9fc\" (UID: \"8174d3dc-0931-484a-850f-3649234ef9fc\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.384886 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhscw\" (UniqueName: 
\"kubernetes.io/projected/92d2026b-e43c-47d5-ad78-e532a664f033-kube-api-access-zhscw\") pod \"92d2026b-e43c-47d5-ad78-e532a664f033\" (UID: \"92d2026b-e43c-47d5-ad78-e532a664f033\") " Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.390545 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8174d3dc-0931-484a-850f-3649234ef9fc" (UID: "8174d3dc-0931-484a-850f-3649234ef9fc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.391767 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d2026b-e43c-47d5-ad78-e532a664f033-kube-api-access-zhscw" (OuterVolumeSpecName: "kube-api-access-zhscw") pod "92d2026b-e43c-47d5-ad78-e532a664f033" (UID: "92d2026b-e43c-47d5-ad78-e532a664f033"). InnerVolumeSpecName "kube-api-access-zhscw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.391974 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run" (OuterVolumeSpecName: "var-run") pod "8174d3dc-0931-484a-850f-3649234ef9fc" (UID: "8174d3dc-0931-484a-850f-3649234ef9fc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.392045 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8174d3dc-0931-484a-850f-3649234ef9fc" (UID: "8174d3dc-0931-484a-850f-3649234ef9fc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.393055 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8174d3dc-0931-484a-850f-3649234ef9fc-scripts" (OuterVolumeSpecName: "scripts") pod "8174d3dc-0931-484a-850f-3649234ef9fc" (UID: "8174d3dc-0931-484a-850f-3649234ef9fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.396738 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8nxg9"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.445236 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8nxg9"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.457522 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8174d3dc-0931-484a-850f-3649234ef9fc-kube-api-access-g7kw8" (OuterVolumeSpecName: "kube-api-access-g7kw8") pod "8174d3dc-0931-484a-850f-3649234ef9fc" (UID: "8174d3dc-0931-484a-850f-3649234ef9fc"). InnerVolumeSpecName "kube-api-access-g7kw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.488785 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7kw8\" (UniqueName: \"kubernetes.io/projected/8174d3dc-0931-484a-850f-3649234ef9fc-kube-api-access-g7kw8\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.488812 5034 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.488822 5034 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.488831 5034 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8174d3dc-0931-484a-850f-3649234ef9fc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.488843 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhscw\" (UniqueName: \"kubernetes.io/projected/92d2026b-e43c-47d5-ad78-e532a664f033-kube-api-access-zhscw\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.488853 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8174d3dc-0931-484a-850f-3649234ef9fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.491378 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jw9kr"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.554717 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6tszf"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.581524 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6tszf"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.581606 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a63e68c4-06b7-4513-ac92-6415cbd75e88" (UID: "a63e68c4-06b7-4513-ac92-6415cbd75e88"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.591481 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a63e68c4-06b7-4513-ac92-6415cbd75e88-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.601930 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92d2026b-e43c-47d5-ad78-e532a664f033" (UID: "92d2026b-e43c-47d5-ad78-e532a664f033"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.620027 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.620487 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92d2026b-e43c-47d5-ad78-e532a664f033" (UID: "92d2026b-e43c-47d5-ad78-e532a664f033"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.636827 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6b89b"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.648703 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-pp9cm"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.657210 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6b89b"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.663908 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-f9vld"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.671314 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a004-account-create-update-csgjm"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.678252 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8174d3dc-0931-484a-850f-3649234ef9fc" (UID: "8174d3dc-0931-484a-850f-3649234ef9fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.680319 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-f9vld"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.687799 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerName="rabbitmq" containerID="cri-o://2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052" gracePeriod=604800 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.697209 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.697479 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="033973ad-b5ce-4136-92d2-0a2b976324db" containerName="nova-scheduler-scheduler" containerID="cri-o://d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.699447 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.699463 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.699473 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.733529 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a004-account-create-update-csgjm"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.775177 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "92d2026b-e43c-47d5-ad78-e532a664f033" (UID: "92d2026b-e43c-47d5-ad78-e532a664f033"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.776409 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" containerName="galera" containerID="cri-o://3960d5a24203a890e55b4c5a09107afdae62bb85f6aa67fa283d78bfd0a56edd" gracePeriod=30 Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.788519 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8420-account-create-update-qgcgf"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.803523 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-config" (OuterVolumeSpecName: "config") pod "92d2026b-e43c-47d5-ad78-e532a664f033" (UID: "92d2026b-e43c-47d5-ad78-e532a664f033"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.871836 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-pp9cm"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.878499 5034 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.878537 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.888545 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-xzj87"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.899782 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "8174d3dc-0931-484a-850f-3649234ef9fc" (UID: "8174d3dc-0931-484a-850f-3649234ef9fc"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.943270 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92d2026b-e43c-47d5-ad78-e532a664f033" (UID: "92d2026b-e43c-47d5-ad78-e532a664f033"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.970034 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-xzj87"] Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.990549 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d2026b-e43c-47d5-ad78-e532a664f033-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:56 crc kubenswrapper[5034]: I0105 22:15:56.990712 5034 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8174d3dc-0931-484a-850f-3649234ef9fc-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.104411 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.104773 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data podName:94526d3f-1e21-4eef-abb7-5cd05bfb1670 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:01.104757654 +0000 UTC m=+1453.476757093 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data") pod "rabbitmq-server-0" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670") : configmap "rabbitmq-config-data" not found Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.145543 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8f99f63-df74-4392-a5fc-bf090571266f/ovsdbserver-nb/0.log" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.145624 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.173169 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_434da13f-30c5-4464-9b48-3d93ec7762d0/ovsdbserver-sb/0.log" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.173253 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.175202 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jbklw" event={"ID":"0623db6b-2e6a-4739-8c7f-ec9a98b51d93","Type":"ContainerStarted","Data":"bee3983507957a52c6825ce5a11b7300dfc8f910469ec7d3ffd94dc08225c1ca"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.265233 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed" exitCode=0 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.265270 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542" exitCode=0 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.265278 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerID="1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08" exitCode=0 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.265350 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.265374 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.265383 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.267426 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" event={"ID":"92d2026b-e43c-47d5-ad78-e532a664f033","Type":"ContainerDied","Data":"fbf9c2ee57c0fbb724ab9b3f165456eeb3791c73c139fa96562f088e6552688d"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.267469 5034 scope.go:117] "RemoveContainer" containerID="20fbb538b44cc958c0860e6e3d037b5579a58d90268eae871a9788dbe68e60d7" Jan 05 22:15:57 crc 
kubenswrapper[5034]: I0105 22:15:57.267776 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-gcdp2" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.279829 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-jbklw" podStartSLOduration=5.279805299 podStartE2EDuration="5.279805299s" podCreationTimestamp="2026-01-05 22:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:15:57.249834829 +0000 UTC m=+1449.621834268" watchObservedRunningTime="2026-01-05 22:15:57.279805299 +0000 UTC m=+1449.651804738" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.294073 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gbcl" event={"ID":"8174d3dc-0931-484a-850f-3649234ef9fc","Type":"ContainerDied","Data":"c3c5d34d6b66b0088f1ab0fef83d966d895f6283f80d360c3db0d7dbe3ab7b3c"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.294205 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gbcl" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.323798 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-config" (OuterVolumeSpecName: "config") pod "434da13f-30c5-4464-9b48-3d93ec7762d0" (UID: "434da13f-30c5-4464-9b48-3d93ec7762d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.323843 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-config\") pod \"434da13f-30c5-4464-9b48-3d93ec7762d0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.324120 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdbserver-nb-tls-certs\") pod \"d8f99f63-df74-4392-a5fc-bf090571266f\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.324150 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-config\") pod \"d8f99f63-df74-4392-a5fc-bf090571266f\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.324200 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-scripts\") pod \"434da13f-30c5-4464-9b48-3d93ec7762d0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.324809 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-config" (OuterVolumeSpecName: "config") pod "d8f99f63-df74-4392-a5fc-bf090571266f" (UID: "d8f99f63-df74-4392-a5fc-bf090571266f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329040 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-metrics-certs-tls-certs\") pod \"d8f99f63-df74-4392-a5fc-bf090571266f\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329188 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnhll\" (UniqueName: \"kubernetes.io/projected/d8f99f63-df74-4392-a5fc-bf090571266f-kube-api-access-rnhll\") pod \"d8f99f63-df74-4392-a5fc-bf090571266f\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329269 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"d8f99f63-df74-4392-a5fc-bf090571266f\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329307 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-scripts\") pod \"d8f99f63-df74-4392-a5fc-bf090571266f\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329377 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdbserver-sb-tls-certs\") pod \"434da13f-30c5-4464-9b48-3d93ec7762d0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329628 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-scripts" (OuterVolumeSpecName: "scripts") pod "434da13f-30c5-4464-9b48-3d93ec7762d0" (UID: "434da13f-30c5-4464-9b48-3d93ec7762d0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329644 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-combined-ca-bundle\") pod \"434da13f-30c5-4464-9b48-3d93ec7762d0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329786 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdb-rundir\") pod \"d8f99f63-df74-4392-a5fc-bf090571266f\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329850 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9px\" (UniqueName: \"kubernetes.io/projected/434da13f-30c5-4464-9b48-3d93ec7762d0-kube-api-access-fw9px\") pod \"434da13f-30c5-4464-9b48-3d93ec7762d0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.329931 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdb-rundir\") pod \"434da13f-30c5-4464-9b48-3d93ec7762d0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.330019 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-metrics-certs-tls-certs\") pod \"434da13f-30c5-4464-9b48-3d93ec7762d0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.330043 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-combined-ca-bundle\") pod \"d8f99f63-df74-4392-a5fc-bf090571266f\" (UID: \"d8f99f63-df74-4392-a5fc-bf090571266f\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.330202 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"434da13f-30c5-4464-9b48-3d93ec7762d0\" (UID: \"434da13f-30c5-4464-9b48-3d93ec7762d0\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.332162 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.332183 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.332195 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/434da13f-30c5-4464-9b48-3d93ec7762d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.334411 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-scripts" (OuterVolumeSpecName: "scripts") pod 
"d8f99f63-df74-4392-a5fc-bf090571266f" (UID: "d8f99f63-df74-4392-a5fc-bf090571266f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.338200 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "d8f99f63-df74-4392-a5fc-bf090571266f" (UID: "d8f99f63-df74-4392-a5fc-bf090571266f"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.338268 5034 generic.go:334] "Generic (PLEG): container finished" podID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerID="e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39" exitCode=143 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.338483 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-679959649b-bksnm" event={"ID":"ff813b46-2db4-46af-ad1b-3e84fcb8e33b","Type":"ContainerDied","Data":"e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.350629 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "434da13f-30c5-4464-9b48-3d93ec7762d0" (UID: "434da13f-30c5-4464-9b48-3d93ec7762d0"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.351020 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-gcdp2"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.362217 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "434da13f-30c5-4464-9b48-3d93ec7762d0" (UID: "434da13f-30c5-4464-9b48-3d93ec7762d0"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.363966 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434da13f-30c5-4464-9b48-3d93ec7762d0-kube-api-access-fw9px" (OuterVolumeSpecName: "kube-api-access-fw9px") pod "434da13f-30c5-4464-9b48-3d93ec7762d0" (UID: "434da13f-30c5-4464-9b48-3d93ec7762d0"). InnerVolumeSpecName "kube-api-access-fw9px". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.364781 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f99f63-df74-4392-a5fc-bf090571266f-kube-api-access-rnhll" (OuterVolumeSpecName: "kube-api-access-rnhll") pod "d8f99f63-df74-4392-a5fc-bf090571266f" (UID: "d8f99f63-df74-4392-a5fc-bf090571266f"). InnerVolumeSpecName "kube-api-access-rnhll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.384233 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "d8f99f63-df74-4392-a5fc-bf090571266f" (UID: "d8f99f63-df74-4392-a5fc-bf090571266f"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.387030 5034 scope.go:117] "RemoveContainer" containerID="bc6219970b576897171a628683199691015abe779dd7aae6b57bd79340d78bfb" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.394203 5034 generic.go:334] "Generic (PLEG): container finished" podID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" exitCode=0 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.394503 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v4mvr" event={"ID":"a4f67d51-b26b-44be-beba-ea5874fe6375","Type":"ContainerDied","Data":"578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.418474 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-gcdp2"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.428421 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sh4vf"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.436303 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.436405 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9px\" (UniqueName: \"kubernetes.io/projected/434da13f-30c5-4464-9b48-3d93ec7762d0-kube-api-access-fw9px\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.436462 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.436531 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.436596 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnhll\" (UniqueName: \"kubernetes.io/projected/d8f99f63-df74-4392-a5fc-bf090571266f-kube-api-access-rnhll\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.436654 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.436723 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f99f63-df74-4392-a5fc-bf090571266f-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.470944 5034 generic.go:334] "Generic (PLEG): container finished" podID="e86527c2-480f-4508-be25-9b2eab1f4274" containerID="ed46a153785fa5cc88884b8676fd407f393552edec4eb2fef58f1e35704d646a" exitCode=143 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.471048 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" 
event={"ID":"e86527c2-480f-4508-be25-9b2eab1f4274","Type":"ContainerDied","Data":"ed46a153785fa5cc88884b8676fd407f393552edec4eb2fef58f1e35704d646a"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.485048 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_434da13f-30c5-4464-9b48-3d93ec7762d0/ovsdbserver-sb/0.log" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.485110 5034 generic.go:334] "Generic (PLEG): container finished" podID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerID="1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833" exitCode=2 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.485124 5034 generic.go:334] "Generic (PLEG): container finished" podID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerID="7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5" exitCode=143 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.485164 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"434da13f-30c5-4464-9b48-3d93ec7762d0","Type":"ContainerDied","Data":"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.485191 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"434da13f-30c5-4464-9b48-3d93ec7762d0","Type":"ContainerDied","Data":"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.485200 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"434da13f-30c5-4464-9b48-3d93ec7762d0","Type":"ContainerDied","Data":"26b277e2719d275282d27a67562c85870055ba822efb900a1481e1687f335ed3"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.485264 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.486991 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "434da13f-30c5-4464-9b48-3d93ec7762d0" (UID: "434da13f-30c5-4464-9b48-3d93ec7762d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.491216 5034 scope.go:117] "RemoveContainer" containerID="c5a11337c879be24ab652f59ee08fa45eea5831a7d805b7ba9723c0e7a770afa" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.506956 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.507287 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="a4a7982e-25f8-4f97-9db5-1c828835ae84" containerName="nova-cell1-conductor-conductor" containerID="cri-o://bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a" gracePeriod=30 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.514878 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8f99f63-df74-4392-a5fc-bf090571266f" (UID: "d8f99f63-df74-4392-a5fc-bf090571266f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.517761 5034 generic.go:334] "Generic (PLEG): container finished" podID="eaa3282d-5044-490b-be8e-5b721c49d338" containerID="60e0dc06f5d11e4ea971c7f7cf856032cf0326af71dd5f35ab34721c3f181e11" exitCode=143 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.517858 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eaa3282d-5044-490b-be8e-5b721c49d338","Type":"ContainerDied","Data":"60e0dc06f5d11e4ea971c7f7cf856032cf0326af71dd5f35ab34721c3f181e11"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.519891 5034 generic.go:334] "Generic (PLEG): container finished" podID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerID="8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c" exitCode=143 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.519980 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54834f39-7569-4cf3-812d-2c6d1bd161b8","Type":"ContainerDied","Data":"8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.521942 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sh4vf"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.539488 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.539521 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.539541 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-72xjf"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.539568 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.539724 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="52dac0d7-1025-49a8-8130-1f0d5050331c" containerName="nova-cell0-conductor-conductor" containerID="cri-o://32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8" gracePeriod=30 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.553161 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-72xjf"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.564640 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.564722 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4gbcl"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.567866 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4gbcl"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.582254 5034 generic.go:334] "Generic (PLEG): container finished" podID="5b457464-69a5-4e13-88a9-9e23250402d1" containerID="3b923196a4d918a3fdfb27750f013d3bb48b93297dee98bc255d7c448bb47281" exitCode=0 Jan 05 
22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.582423 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c458b9699-9b8w4" event={"ID":"5b457464-69a5-4e13-88a9-9e23250402d1","Type":"ContainerDied","Data":"3b923196a4d918a3fdfb27750f013d3bb48b93297dee98bc255d7c448bb47281"} Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.585952 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.587656 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.591720 5034 generic.go:334] "Generic (PLEG): container finished" podID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerID="70ee7ba0dcf1db4ff6a1836f1e8d9db65589363dab8ad409a064b32b276d7892" exitCode=143 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.591788 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc1f97e4-be98-4c2a-b819-17d9c3b0be51","Type":"ContainerDied","Data":"70ee7ba0dcf1db4ff6a1836f1e8d9db65589363dab8ad409a064b32b276d7892"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.594913 5034 generic.go:334] "Generic (PLEG): container finished" podID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerID="dc580afcb0d5964a6770e1805a08f6ab8ba168d592cf15a21e4431f5b1c61076" exitCode=143 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.594992 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9b4698bd-747dm" event={"ID":"6c0c6abd-9d45-4022-aca3-5e63949d1aab","Type":"ContainerDied","Data":"dc580afcb0d5964a6770e1805a08f6ab8ba168d592cf15a21e4431f5b1c61076"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.596239 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8420-account-create-update-qgcgf" event={"ID":"8d62b8ca-f71a-424f-bbee-cc709c382ba9","Type":"ContainerStarted","Data":"4ddb9223afeef6f557040dbbc0b598db3f06433c21c9205166dcf6943df21e57"} Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.602420 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.623026 5034 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 05 22:15:57 crc kubenswrapper[5034]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 05 22:15:57 crc kubenswrapper[5034]: Jan 05 22:15:57 crc kubenswrapper[5034]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 05 22:15:57 crc kubenswrapper[5034]: Jan 05 22:15:57 crc kubenswrapper[5034]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 05 22:15:57 crc kubenswrapper[5034]: Jan 05 
22:15:57 crc kubenswrapper[5034]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 05 22:15:57 crc kubenswrapper[5034]: Jan 05 22:15:57 crc kubenswrapper[5034]: if [ -n "placement" ]; then Jan 05 22:15:57 crc kubenswrapper[5034]: GRANT_DATABASE="placement" Jan 05 22:15:57 crc kubenswrapper[5034]: else Jan 05 22:15:57 crc kubenswrapper[5034]: GRANT_DATABASE="*" Jan 05 22:15:57 crc kubenswrapper[5034]: fi Jan 05 22:15:57 crc kubenswrapper[5034]: Jan 05 22:15:57 crc kubenswrapper[5034]: # going for maximum compatibility here: Jan 05 22:15:57 crc kubenswrapper[5034]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 05 22:15:57 crc kubenswrapper[5034]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 05 22:15:57 crc kubenswrapper[5034]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 05 22:15:57 crc kubenswrapper[5034]: # support updates Jan 05 22:15:57 crc kubenswrapper[5034]: Jan 05 22:15:57 crc kubenswrapper[5034]: $MYSQL_CMD < logger="UnhandledError" Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.626239 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.626393 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="ovn-northd" Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.626631 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-8420-account-create-update-qgcgf" podUID="8d62b8ca-f71a-424f-bbee-cc709c382ba9" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.627507 5034 generic.go:334] "Generic (PLEG): container finished" podID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerID="4ebce1e8d8500a36a9885aca5996773d3f25ae62e84300ef4e6448cbe1e4b976" exitCode=143 Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.627642 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c2c8ddc-f82a-4cca-8a84-90c5713754cf","Type":"ContainerDied","Data":"4ebce1e8d8500a36a9885aca5996773d3f25ae62e84300ef4e6448cbe1e4b976"} Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.654250 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.654283 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.660114 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" event={"ID":"352134f8-9e6a-487a-8afd-b70ab941cd17","Type":"ContainerStarted","Data":"e2a5774ce1513037a5b2e7ac0f9ca0d32135941d26e81fb7876227a273192a91"} 
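The "Unhandled Error" dump above is the kubelet printing the full container spec of an account-create job it could not start: the Secret the spec references ("placement-db-secret" here, "nova-cell0-db-secret" in the next dump) does not exist yet, so the container fails with CreateContainerConfigError and the pod worker keeps retrying. A minimal sketch of how one might confirm this from outside the node, assuming oc access to the cluster; the object names are taken from the log, and note that the journal truncates the script's heredoc after "$MYSQL_CMD <", so the pod spec is the place to read the untruncated command:

  NS=openstack
  # Do the Secrets the jobs need exist yet?
  oc -n "$NS" get secret placement-db-secret nova-cell0-db-secret
  # Read the full, untruncated container command straight from the pod spec:
  oc -n "$NS" get pod placement-8420-account-create-update-qgcgf \
    -o jsonpath='{.spec.containers[0].command}'
  # Follow the kubelet's retry loop on the node (unit name assumed to be kubelet):
  journalctl -u kubelet -f | grep CreateContainerConfigError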
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.673178 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8f99f63-df74-4392-a5fc-bf090571266f/ovsdbserver-nb/0.log"
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.673348 5034 generic.go:334] "Generic (PLEG): container finished" podID="d8f99f63-df74-4392-a5fc-bf090571266f" containerID="2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7" exitCode=143
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.674186 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.674706 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8f99f63-df74-4392-a5fc-bf090571266f","Type":"ContainerDied","Data":"2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7"}
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.674743 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8f99f63-df74-4392-a5fc-bf090571266f","Type":"ContainerDied","Data":"7b1e6d0e3da9cefaef92129e0975e76fe0a76f33fef0b719f39837796a04b3dd"}
Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.681885 5034 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 05 22:15:57 crc kubenswrapper[5034]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash
Jan 05 22:15:57 crc kubenswrapper[5034]:
Jan 05 22:15:57 crc kubenswrapper[5034]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 05 22:15:57 crc kubenswrapper[5034]:
Jan 05 22:15:57 crc kubenswrapper[5034]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 05 22:15:57 crc kubenswrapper[5034]:
Jan 05 22:15:57 crc kubenswrapper[5034]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 05 22:15:57 crc kubenswrapper[5034]:
Jan 05 22:15:57 crc kubenswrapper[5034]: if [ -n "nova_cell0" ]; then
Jan 05 22:15:57 crc kubenswrapper[5034]: GRANT_DATABASE="nova_cell0"
Jan 05 22:15:57 crc kubenswrapper[5034]: else
Jan 05 22:15:57 crc kubenswrapper[5034]: GRANT_DATABASE="*"
Jan 05 22:15:57 crc kubenswrapper[5034]: fi
Jan 05 22:15:57 crc kubenswrapper[5034]:
Jan 05 22:15:57 crc kubenswrapper[5034]: # going for maximum compatibility here:
Jan 05 22:15:57 crc kubenswrapper[5034]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 05 22:15:57 crc kubenswrapper[5034]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 05 22:15:57 crc kubenswrapper[5034]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 05 22:15:57 crc kubenswrapper[5034]: # support updates
Jan 05 22:15:57 crc kubenswrapper[5034]:
Jan 05 22:15:57 crc kubenswrapper[5034]: $MYSQL_CMD < logger="UnhandledError"
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.684273 5034 scope.go:117] "RemoveContainer" containerID="1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833"
Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.685658 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" podUID="352134f8-9e6a-487a-8afd-b70ab941cd17"
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.701711 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "d8f99f63-df74-4392-a5fc-bf090571266f" (UID: "d8f99f63-df74-4392-a5fc-bf090571266f"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.704488 5034 generic.go:334] "Generic (PLEG): container finished" podID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerID="3291212e6637ec18d4d97f490e2b13376227e40062c2978ce122475ac1ccee20" exitCode=143
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.704637 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11669bb7-2e25-4817-a4e8-a487ea5b90cb","Type":"ContainerDied","Data":"3291212e6637ec18d4d97f490e2b13376227e40062c2978ce122475ac1ccee20"}
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.707183 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.743496 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "d8f99f63-df74-4392-a5fc-bf090571266f" (UID: "d8f99f63-df74-4392-a5fc-bf090571266f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.756744 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.756780 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f99f63-df74-4392-a5fc-bf090571266f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.765505 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "434da13f-30c5-4464-9b48-3d93ec7762d0" (UID: "434da13f-30c5-4464-9b48-3d93ec7762d0"). InnerVolumeSpecName "metrics-certs-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.773155 5034 scope.go:117] "RemoveContainer" containerID="7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.783695 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "434da13f-30c5-4464-9b48-3d93ec7762d0" (UID: "434da13f-30c5-4464-9b48-3d93ec7762d0"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.838485 5034 scope.go:117] "RemoveContainer" containerID="1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833" Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.842344 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833\": container with ID starting with 1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833 not found: ID does not exist" containerID="1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.842390 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833"} err="failed to get container status \"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833\": rpc error: code = NotFound desc = could not find container \"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833\": container with ID starting with 1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833 not found: ID does not exist" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.842420 5034 scope.go:117] "RemoveContainer" containerID="7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5" Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.856629 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5\": container with ID starting with 7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5 not found: ID does not exist" containerID="7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.856685 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5"} err="failed to get container status \"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5\": rpc error: code = NotFound desc = could not find container \"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5\": container with ID starting with 7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5 not found: ID does not exist" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.856714 5034 scope.go:117] "RemoveContainer" containerID="1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.857592 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc2kj\" (UniqueName: 
\"kubernetes.io/projected/80869f0d-0e2c-4235-b5e0-3519e6c95ded-kube-api-access-rc2kj\") pod \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.857625 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-vencrypt-tls-certs\") pod \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.857787 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-config-data\") pod \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.857889 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-combined-ca-bundle\") pod \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.857934 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-nova-novncproxy-tls-certs\") pod \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\" (UID: \"80869f0d-0e2c-4235-b5e0-3519e6c95ded\") " Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.858419 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.858439 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/434da13f-30c5-4464-9b48-3d93ec7762d0-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.858495 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 05 22:15:57 crc kubenswrapper[5034]: E0105 22:15:57.858543 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data podName:65a6b236-e04b-494a-a18e-5d1a8a5ae02a nodeName:}" failed. No retries permitted until 2026-01-05 22:16:01.858527133 +0000 UTC m=+1454.230526572 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data") pod "rabbitmq-cell1-server-0" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a") : configmap "rabbitmq-cell1-config-data" not found Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.865281 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833"} err="failed to get container status \"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833\": rpc error: code = NotFound desc = could not find container \"1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833\": container with ID starting with 1005f9f57d966f210046c288991b3ec5114ca9f72c6939653adfb7abf3db2833 not found: ID does not exist" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.865360 5034 scope.go:117] "RemoveContainer" containerID="7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.875030 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5"} err="failed to get container status \"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5\": rpc error: code = NotFound desc = could not find container \"7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5\": container with ID starting with 7a614ac9001d9e92909833c9210dde0c5ac387854637951befd4cca13443cce5 not found: ID does not exist" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.875526 5034 scope.go:117] "RemoveContainer" containerID="f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.887435 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80869f0d-0e2c-4235-b5e0-3519e6c95ded-kube-api-access-rc2kj" (OuterVolumeSpecName: "kube-api-access-rc2kj") pod "80869f0d-0e2c-4235-b5e0-3519e6c95ded" (UID: "80869f0d-0e2c-4235-b5e0-3519e6c95ded"). InnerVolumeSpecName "kube-api-access-rc2kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.928210 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80869f0d-0e2c-4235-b5e0-3519e6c95ded" (UID: "80869f0d-0e2c-4235-b5e0-3519e6c95ded"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.964223 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "80869f0d-0e2c-4235-b5e0-3519e6c95ded" (UID: "80869f0d-0e2c-4235-b5e0-3519e6c95ded"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.979391 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc2kj\" (UniqueName: \"kubernetes.io/projected/80869f0d-0e2c-4235-b5e0-3519e6c95ded-kube-api-access-rc2kj\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.979440 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:57 crc kubenswrapper[5034]: I0105 22:15:57.979452 5034 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.025526 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-config-data" (OuterVolumeSpecName: "config-data") pod "80869f0d-0e2c-4235-b5e0-3519e6c95ded" (UID: "80869f0d-0e2c-4235-b5e0-3519e6c95ded"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.065113 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "80869f0d-0e2c-4235-b5e0-3519e6c95ded" (UID: "80869f0d-0e2c-4235-b5e0-3519e6c95ded"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.081531 5034 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.081557 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80869f0d-0e2c-4235-b5e0-3519e6c95ded-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.087898 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399b297f-2aeb-4859-b528-72ff3213bdcc" path="/var/lib/kubelet/pods/399b297f-2aeb-4859-b528-72ff3213bdcc/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.088516 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4764f8ba-949a-4792-9cd1-2aae9c0a7d92" path="/var/lib/kubelet/pods/4764f8ba-949a-4792-9cd1-2aae9c0a7d92/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.089218 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6235a7f3-12fc-455a-a4ba-a09957646334" path="/var/lib/kubelet/pods/6235a7f3-12fc-455a-a4ba-a09957646334/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.089912 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65d4ede5-3c50-4cfe-a1aa-276ef430fe97" path="/var/lib/kubelet/pods/65d4ede5-3c50-4cfe-a1aa-276ef430fe97/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.091263 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8174d3dc-0931-484a-850f-3649234ef9fc" path="/var/lib/kubelet/pods/8174d3dc-0931-484a-850f-3649234ef9fc/volumes" 
Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.099842 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d2026b-e43c-47d5-ad78-e532a664f033" path="/var/lib/kubelet/pods/92d2026b-e43c-47d5-ad78-e532a664f033/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.103645 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9317f553-2101-4507-8f08-52e23105b5c1" path="/var/lib/kubelet/pods/9317f553-2101-4507-8f08-52e23105b5c1/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.104697 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b64a7d9-a1a5-4d7e-9012-c770d15f4267" path="/var/lib/kubelet/pods/9b64a7d9-a1a5-4d7e-9012-c770d15f4267/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.105384 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1b470b5-b9ec-4d92-8965-9c0be5366721" path="/var/lib/kubelet/pods/a1b470b5-b9ec-4d92-8965-9c0be5366721/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.110889 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5121c9a-d20b-4f58-b7f1-58852b2f4e1b" path="/var/lib/kubelet/pods/a5121c9a-d20b-4f58-b7f1-58852b2f4e1b/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.111722 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a63e68c4-06b7-4513-ac92-6415cbd75e88" path="/var/lib/kubelet/pods/a63e68c4-06b7-4513-ac92-6415cbd75e88/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.113606 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08a0a0a-78db-4b23-b4bd-15c14d70c14a" path="/var/lib/kubelet/pods/b08a0a0a-78db-4b23-b4bd-15c14d70c14a/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.131775 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d0aa4e-b8be-479d-8583-bb3cd2a245f2" path="/var/lib/kubelet/pods/d2d0aa4e-b8be-479d-8583-bb3cd2a245f2/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.132503 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7e5dc1-c4d9-4481-a667-3cdf0a550f25" path="/var/lib/kubelet/pods/ef7e5dc1-c4d9-4481-a667-3cdf0a550f25/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.142490 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3" path="/var/lib/kubelet/pods/f57c5ba3-f839-4b17-8c4e-4fc0c5703cf3/volumes" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.143759 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.143788 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.215717 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6576bc4c77-zzdbj"] Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.238808 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6576bc4c77-zzdbj" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-httpd" containerID="cri-o://f1b3c35f63294d582fd4b25a3dbec8507d92d98c35739809ed98988da8b876c1" gracePeriod=30 Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.239302 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6576bc4c77-zzdbj" 
podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-server" containerID="cri-o://ac34648fd83cfc034ae26bdfef9f0975cae1c33e8e010cf1a87f4c7182f7691b" gracePeriod=30 Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.302452 5034 scope.go:117] "RemoveContainer" containerID="2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.316465 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.346563 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.357537 5034 scope.go:117] "RemoveContainer" containerID="f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b" Jan 05 22:15:58 crc kubenswrapper[5034]: E0105 22:15:58.358117 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b\": container with ID starting with f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b not found: ID does not exist" containerID="f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.358151 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b"} err="failed to get container status \"f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b\": rpc error: code = NotFound desc = could not find container \"f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b\": container with ID starting with f722c8cc41474947634327b902a2afc3f11519183f852425265bef696a182c9b not found: ID does not exist" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.358174 5034 scope.go:117] "RemoveContainer" containerID="2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7" Jan 05 22:15:58 crc kubenswrapper[5034]: E0105 22:15:58.358353 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7\": container with ID starting with 2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7 not found: ID does not exist" containerID="2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.358371 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7"} err="failed to get container status \"2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7\": rpc error: code = NotFound desc = could not find container \"2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7\": container with ID starting with 2820e9acfadebb62e26c390ed61cbc173de51cf6c9c7b12869dc4aeeb282fdb7 not found: ID does not exist" Jan 05 22:15:58 crc kubenswrapper[5034]: E0105 22:15:58.632593 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 05 22:15:58 crc 
kubenswrapper[5034]: E0105 22:15:58.649549 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 05 22:15:58 crc kubenswrapper[5034]: E0105 22:15:58.657706 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 05 22:15:58 crc kubenswrapper[5034]: E0105 22:15:58.657829 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="a4a7982e-25f8-4f97-9db5-1c828835ae84" containerName="nova-cell1-conductor-conductor" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.807850 5034 generic.go:334] "Generic (PLEG): container finished" podID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" containerID="bee3983507957a52c6825ce5a11b7300dfc8f910469ec7d3ffd94dc08225c1ca" exitCode=1 Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.807925 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jbklw" event={"ID":"0623db6b-2e6a-4739-8c7f-ec9a98b51d93","Type":"ContainerDied","Data":"bee3983507957a52c6825ce5a11b7300dfc8f910469ec7d3ffd94dc08225c1ca"} Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.808629 5034 scope.go:117] "RemoveContainer" containerID="bee3983507957a52c6825ce5a11b7300dfc8f910469ec7d3ffd94dc08225c1ca" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.868445 5034 generic.go:334] "Generic (PLEG): container finished" podID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" containerID="3960d5a24203a890e55b4c5a09107afdae62bb85f6aa67fa283d78bfd0a56edd" exitCode=0 Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.868512 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bb0c349d-e74e-49eb-ba86-8a435d15ba66","Type":"ContainerDied","Data":"3960d5a24203a890e55b4c5a09107afdae62bb85f6aa67fa283d78bfd0a56edd"} Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.872522 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.889743 5034 generic.go:334] "Generic (PLEG): container finished" podID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerID="f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399" exitCode=0 Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.889802 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-679959649b-bksnm" event={"ID":"ff813b46-2db4-46af-ad1b-3e84fcb8e33b","Type":"ContainerDied","Data":"f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399"} Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.889829 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-679959649b-bksnm" event={"ID":"ff813b46-2db4-46af-ad1b-3e84fcb8e33b","Type":"ContainerDied","Data":"525589c0f54e7d0cf5eb3db05a066b702d75de6747488a1105c0ac92a8bf0343"} Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.889847 5034 scope.go:117] "RemoveContainer" containerID="f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.889935 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-679959649b-bksnm" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.976748 5034 generic.go:334] "Generic (PLEG): container finished" podID="80869f0d-0e2c-4235-b5e0-3519e6c95ded" containerID="b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a" exitCode=0 Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.977207 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.980507 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"80869f0d-0e2c-4235-b5e0-3519e6c95ded","Type":"ContainerDied","Data":"b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a"} Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.980555 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"80869f0d-0e2c-4235-b5e0-3519e6c95ded","Type":"ContainerDied","Data":"04457f4471c2abc6e3434d2c02f34f827bc58799ec86f78be488f5871d72ba26"} Jan 05 22:15:58 crc kubenswrapper[5034]: I0105 22:15:58.991510 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.024953 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwjj4\" (UniqueName: \"kubernetes.io/projected/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-kube-api-access-lwjj4\") pod \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.025070 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-combined-ca-bundle\") pod \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.025230 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data-custom\") pod \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.025258 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data\") pod \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.025283 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-logs\") pod \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\" (UID: \"ff813b46-2db4-46af-ad1b-3e84fcb8e33b\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.028232 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-logs" (OuterVolumeSpecName: "logs") pod "ff813b46-2db4-46af-ad1b-3e84fcb8e33b" (UID: "ff813b46-2db4-46af-ad1b-3e84fcb8e33b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.040233 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ff813b46-2db4-46af-ad1b-3e84fcb8e33b" (UID: "ff813b46-2db4-46af-ad1b-3e84fcb8e33b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.045400 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-kube-api-access-lwjj4" (OuterVolumeSpecName: "kube-api-access-lwjj4") pod "ff813b46-2db4-46af-ad1b-3e84fcb8e33b" (UID: "ff813b46-2db4-46af-ad1b-3e84fcb8e33b"). InnerVolumeSpecName "kube-api-access-lwjj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.063632 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.089543 5034 scope.go:117] "RemoveContainer" containerID="e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.117656 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.121576 5034 generic.go:334] "Generic (PLEG): container finished" podID="e86527c2-480f-4508-be25-9b2eab1f4274" containerID="d6abfd2461105e8e1eea8e2d6a6889e3b27bf573f5c2e81d53d24675eaa17698" exitCode=0 Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.121693 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" event={"ID":"e86527c2-480f-4508-be25-9b2eab1f4274","Type":"ContainerDied","Data":"d6abfd2461105e8e1eea8e2d6a6889e3b27bf573f5c2e81d53d24675eaa17698"} Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.127052 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.127132 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-operator-scripts\") pod \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.127250 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-default\") pod \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.127323 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-galera-tls-certs\") pod \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.127367 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kolla-config\") pod \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.127426 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-generated\") pod \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.127474 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-combined-ca-bundle\") pod \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\" (UID: 
\"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.127670 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhxw7\" (UniqueName: \"kubernetes.io/projected/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kube-api-access-vhxw7\") pod \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\" (UID: \"bb0c349d-e74e-49eb-ba86-8a435d15ba66\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.128058 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.128071 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.128094 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwjj4\" (UniqueName: \"kubernetes.io/projected/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-kube-api-access-lwjj4\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.129475 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "bb0c349d-e74e-49eb-ba86-8a435d15ba66" (UID: "bb0c349d-e74e-49eb-ba86-8a435d15ba66"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.131402 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "bb0c349d-e74e-49eb-ba86-8a435d15ba66" (UID: "bb0c349d-e74e-49eb-ba86-8a435d15ba66"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.133177 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb0c349d-e74e-49eb-ba86-8a435d15ba66" (UID: "bb0c349d-e74e-49eb-ba86-8a435d15ba66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.133618 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "bb0c349d-e74e-49eb-ba86-8a435d15ba66" (UID: "bb0c349d-e74e-49eb-ba86-8a435d15ba66"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.134450 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kube-api-access-vhxw7" (OuterVolumeSpecName: "kube-api-access-vhxw7") pod "bb0c349d-e74e-49eb-ba86-8a435d15ba66" (UID: "bb0c349d-e74e-49eb-ba86-8a435d15ba66"). InnerVolumeSpecName "kube-api-access-vhxw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.139659 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": read tcp 10.217.0.2:57642->10.217.0.164:8776: read: connection reset by peer" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.158591 5034 scope.go:117] "RemoveContainer" containerID="f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399" Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.172648 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399\": container with ID starting with f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399 not found: ID does not exist" containerID="f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.172686 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399"} err="failed to get container status \"f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399\": rpc error: code = NotFound desc = could not find container \"f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399\": container with ID starting with f42ceddb90744bafde8510ed4fc1b0dcd5ad3d5f331a8ca5b605ad38d9db9399 not found: ID does not exist" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.172712 5034 scope.go:117] "RemoveContainer" containerID="e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39" Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.173156 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39\": container with ID starting with e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39 not found: ID does not exist" containerID="e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.173172 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39"} err="failed to get container status \"e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39\": rpc error: code = NotFound desc = could not find container \"e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39\": container with ID starting with e661d52a0677b9657dd09e559a9835237ca370908981565601f005930ded1a39 not found: ID does not exist" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.173189 5034 scope.go:117] "RemoveContainer" containerID="b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.175603 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "mysql-db") pod "bb0c349d-e74e-49eb-ba86-8a435d15ba66" (UID: "bb0c349d-e74e-49eb-ba86-8a435d15ba66"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.178619 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff813b46-2db4-46af-ad1b-3e84fcb8e33b" (UID: "ff813b46-2db4-46af-ad1b-3e84fcb8e33b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.193102 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3a3c79c1-b936-44a0-bca1-68f7d69d8fab","Type":"ContainerDied","Data":"72960a55513e9ecef8e41d52208d6be80b39479282ae6e2f1bc413cfa48dcc2f"} Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.193160 5034 generic.go:334] "Generic (PLEG): container finished" podID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerID="72960a55513e9ecef8e41d52208d6be80b39479282ae6e2f1bc413cfa48dcc2f" exitCode=0 Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.238026 5034 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.238057 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.238068 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.238098 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhxw7\" (UniqueName: \"kubernetes.io/projected/bb0c349d-e74e-49eb-ba86-8a435d15ba66-kube-api-access-vhxw7\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.238127 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.238142 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.238155 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bb0c349d-e74e-49eb-ba86-8a435d15ba66-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.281205 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data" (OuterVolumeSpecName: "config-data") pod "ff813b46-2db4-46af-ad1b-3e84fcb8e33b" (UID: "ff813b46-2db4-46af-ad1b-3e84fcb8e33b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.284389 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.288279 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0c349d-e74e-49eb-ba86-8a435d15ba66" (UID: "bb0c349d-e74e-49eb-ba86-8a435d15ba66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.342178 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff813b46-2db4-46af-ad1b-3e84fcb8e33b-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.342216 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.342227 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.346256 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "bb0c349d-e74e-49eb-ba86-8a435d15ba66" (UID: "bb0c349d-e74e-49eb-ba86-8a435d15ba66"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.462282 5034 scope.go:117] "RemoveContainer" containerID="b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.465480 5034 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0c349d-e74e-49eb-ba86-8a435d15ba66-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.471186 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a\": container with ID starting with b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a not found: ID does not exist" containerID="b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.471223 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a"} err="failed to get container status \"b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a\": rpc error: code = NotFound desc = could not find container \"b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a\": container with ID starting with b1b5a1452f07c4f532a171757f970d19183f809460e7e8b48e828295d1d6258a not found: ID does not exist" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.568964 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": dial tcp 10.217.0.205:8775: connect: connection refused" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.569230 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": dial tcp 10.217.0.205:8775: connect: connection refused" Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.654484 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.654946 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.655429 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not 
found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.655461 5034 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.670231 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.694692 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.703490 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:15:59 crc kubenswrapper[5034]: E0105 22:15:59.703551 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.781292 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.789928 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9b4698bd-747dm" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:39546->10.217.0.160:9311: read: connection reset by peer" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.789959 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9b4698bd-747dm" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:39542->10.217.0.160:9311: read: connection reset by peer" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.802646 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-679959649b-bksnm"] Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.811769 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-679959649b-bksnm"] Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.881490 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" path="/var/lib/kubelet/pods/434da13f-30c5-4464-9b48-3d93ec7762d0/volumes" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.882300 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80869f0d-0e2c-4235-b5e0-3519e6c95ded" path="/var/lib/kubelet/pods/80869f0d-0e2c-4235-b5e0-3519e6c95ded/volumes" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.883016 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" path="/var/lib/kubelet/pods/d8f99f63-df74-4392-a5fc-bf090571266f/volumes" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.887454 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" path="/var/lib/kubelet/pods/ff813b46-2db4-46af-ad1b-3e84fcb8e33b/volumes" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.948586 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.974993 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data-custom\") pod \"e86527c2-480f-4508-be25-9b2eab1f4274\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.975055 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-combined-ca-bundle\") pod \"e86527c2-480f-4508-be25-9b2eab1f4274\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.975149 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data\") pod \"e86527c2-480f-4508-be25-9b2eab1f4274\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.975187 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kttbs\" (UniqueName: \"kubernetes.io/projected/e86527c2-480f-4508-be25-9b2eab1f4274-kube-api-access-kttbs\") pod \"e86527c2-480f-4508-be25-9b2eab1f4274\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.975267 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e86527c2-480f-4508-be25-9b2eab1f4274-logs\") pod \"e86527c2-480f-4508-be25-9b2eab1f4274\" (UID: \"e86527c2-480f-4508-be25-9b2eab1f4274\") " Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.988839 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e86527c2-480f-4508-be25-9b2eab1f4274" (UID: "e86527c2-480f-4508-be25-9b2eab1f4274"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:59 crc kubenswrapper[5034]: I0105 22:15:59.991440 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e86527c2-480f-4508-be25-9b2eab1f4274-logs" (OuterVolumeSpecName: "logs") pod "e86527c2-480f-4508-be25-9b2eab1f4274" (UID: "e86527c2-480f-4508-be25-9b2eab1f4274"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.012680 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86527c2-480f-4508-be25-9b2eab1f4274-kube-api-access-kttbs" (OuterVolumeSpecName: "kube-api-access-kttbs") pod "e86527c2-480f-4508-be25-9b2eab1f4274" (UID: "e86527c2-480f-4508-be25-9b2eab1f4274"). InnerVolumeSpecName "kube-api-access-kttbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.060053 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e86527c2-480f-4508-be25-9b2eab1f4274" (UID: "e86527c2-480f-4508-be25-9b2eab1f4274"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.081802 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-combined-ca-bundle\") pod \"a4a7982e-25f8-4f97-9db5-1c828835ae84\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.082263 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-config-data\") pod \"a4a7982e-25f8-4f97-9db5-1c828835ae84\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.082453 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8t2p\" (UniqueName: \"kubernetes.io/projected/a4a7982e-25f8-4f97-9db5-1c828835ae84-kube-api-access-c8t2p\") pod \"a4a7982e-25f8-4f97-9db5-1c828835ae84\" (UID: \"a4a7982e-25f8-4f97-9db5-1c828835ae84\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.083639 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.096486 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kttbs\" (UniqueName: \"kubernetes.io/projected/e86527c2-480f-4508-be25-9b2eab1f4274-kube-api-access-kttbs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.096596 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e86527c2-480f-4508-be25-9b2eab1f4274-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.096664 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.109668 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.123598 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4a7982e-25f8-4f97-9db5-1c828835ae84" (UID: "a4a7982e-25f8-4f97-9db5-1c828835ae84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.151311 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a7982e-25f8-4f97-9db5-1c828835ae84-kube-api-access-c8t2p" (OuterVolumeSpecName: "kube-api-access-c8t2p") pod "a4a7982e-25f8-4f97-9db5-1c828835ae84" (UID: "a4a7982e-25f8-4f97-9db5-1c828835ae84"). InnerVolumeSpecName "kube-api-access-c8t2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.177692 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-config-data" (OuterVolumeSpecName: "config-data") pod "a4a7982e-25f8-4f97-9db5-1c828835ae84" (UID: "a4a7982e-25f8-4f97-9db5-1c828835ae84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.185695 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.192273 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.198072 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8t2p\" (UniqueName: \"kubernetes.io/projected/a4a7982e-25f8-4f97-9db5-1c828835ae84-kube-api-access-c8t2p\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.198113 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.198124 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a7982e-25f8-4f97-9db5-1c828835ae84-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: E0105 22:16:00.216646 5034 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c2c8ddc_f82a_4cca_8a84_90c5713754cf.slice/crio-conmon-c73e4953491ec9f47f29267b9f26809a0789ba4fdcd8a63a9120ac77e00f3874.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb0c349d_e74e_49eb_ba86_8a435d15ba66.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c2c8ddc_f82a_4cca_8a84_90c5713754cf.slice/crio-c73e4953491ec9f47f29267b9f26809a0789ba4fdcd8a63a9120ac77e00f3874.scope\": RecentStats: unable to find data in memory cache]" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.220556 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.223140 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" event={"ID":"352134f8-9e6a-487a-8afd-b70ab941cd17","Type":"ContainerDied","Data":"e2a5774ce1513037a5b2e7ac0f9ca0d32135941d26e81fb7876227a273192a91"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.223200 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-dbbc-account-create-update-pp9cm" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.227423 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data" (OuterVolumeSpecName: "config-data") pod "e86527c2-480f-4508-be25-9b2eab1f4274" (UID: "e86527c2-480f-4508-be25-9b2eab1f4274"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.242570 5034 generic.go:334] "Generic (PLEG): container finished" podID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerID="c73e4953491ec9f47f29267b9f26809a0789ba4fdcd8a63a9120ac77e00f3874" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.242912 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c2c8ddc-f82a-4cca-8a84-90c5713754cf","Type":"ContainerDied","Data":"c73e4953491ec9f47f29267b9f26809a0789ba4fdcd8a63a9120ac77e00f3874"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.250504 5034 generic.go:334] "Generic (PLEG): container finished" podID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerID="0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.251441 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-664f75f5b6-lz6hv" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.252106 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664f75f5b6-lz6hv" event={"ID":"d11dc2db-1f91-4ec6-9efd-333fcafface4","Type":"ContainerDied","Data":"0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.252203 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664f75f5b6-lz6hv" event={"ID":"d11dc2db-1f91-4ec6-9efd-333fcafface4","Type":"ContainerDied","Data":"e96c0b944bbe9235e4cb293e573ef000f7a81d3face410d936157396f7fcb4ba"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.252229 5034 scope.go:117] "RemoveContainer" containerID="0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.257362 5034 generic.go:334] "Generic (PLEG): container finished" podID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerID="1b21cd994fe20b3b5d50504e11ca0a8e875e6198715a73ad6b85c6ab13ae3f03" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.257412 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11669bb7-2e25-4817-a4e8-a487ea5b90cb","Type":"ContainerDied","Data":"1b21cd994fe20b3b5d50504e11ca0a8e875e6198715a73ad6b85c6ab13ae3f03"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.270953 5034 generic.go:334] "Generic (PLEG): container finished" podID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerID="6800fbb56148e87618fda2df370bdf264e113c0b015622bf898f8261f8fafda1" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.271095 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9b4698bd-747dm" event={"ID":"6c0c6abd-9d45-4022-aca3-5e63949d1aab","Type":"ContainerDied","Data":"6800fbb56148e87618fda2df370bdf264e113c0b015622bf898f8261f8fafda1"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.277447 5034 generic.go:334] "Generic (PLEG): 
container finished" podID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerID="ac34648fd83cfc034ae26bdfef9f0975cae1c33e8e010cf1a87f4c7182f7691b" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.277480 5034 generic.go:334] "Generic (PLEG): container finished" podID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerID="f1b3c35f63294d582fd4b25a3dbec8507d92d98c35739809ed98988da8b876c1" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.277530 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6576bc4c77-zzdbj" event={"ID":"983e4ee8-36de-4b90-b18b-eed4db804a3d","Type":"ContainerDied","Data":"ac34648fd83cfc034ae26bdfef9f0975cae1c33e8e010cf1a87f4c7182f7691b"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.277560 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6576bc4c77-zzdbj" event={"ID":"983e4ee8-36de-4b90-b18b-eed4db804a3d","Type":"ContainerDied","Data":"f1b3c35f63294d582fd4b25a3dbec8507d92d98c35739809ed98988da8b876c1"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.279856 5034 generic.go:334] "Generic (PLEG): container finished" podID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" containerID="8418957d5569499030e540d503f2cad0da02c2f42f6a83cfde9bd58408288a6e" exitCode=1 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.279919 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jbklw" event={"ID":"0623db6b-2e6a-4739-8c7f-ec9a98b51d93","Type":"ContainerDied","Data":"8418957d5569499030e540d503f2cad0da02c2f42f6a83cfde9bd58408288a6e"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.280641 5034 scope.go:117] "RemoveContainer" containerID="8418957d5569499030e540d503f2cad0da02c2f42f6a83cfde9bd58408288a6e" Jan 05 22:16:00 crc kubenswrapper[5034]: E0105 22:16:00.280966 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-jbklw_openstack(0623db6b-2e6a-4739-8c7f-ec9a98b51d93)\"" pod="openstack/root-account-create-update-jbklw" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.284279 5034 generic.go:334] "Generic (PLEG): container finished" podID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerID="d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.284597 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54834f39-7569-4cf3-812d-2c6d1bd161b8","Type":"ContainerDied","Data":"d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.284635 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54834f39-7569-4cf3-812d-2c6d1bd161b8","Type":"ContainerDied","Data":"7f0345b7c44eec7a1a727560981fb89ee6b4ce7d95cc274cae05ee7604d64312"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.284833 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.287278 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8420-account-create-update-qgcgf" event={"ID":"8d62b8ca-f71a-424f-bbee-cc709c382ba9","Type":"ContainerDied","Data":"4ddb9223afeef6f557040dbbc0b598db3f06433c21c9205166dcf6943df21e57"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.287384 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8420-account-create-update-qgcgf" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.289832 5034 scope.go:117] "RemoveContainer" containerID="08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.291106 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" event={"ID":"e86527c2-480f-4508-be25-9b2eab1f4274","Type":"ContainerDied","Data":"67fe16ebfb09e58dc945e192dae33c37d2ed1ee4a74813851295f669c0b65b11"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.291122 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6d6dccdcd5-gglfm" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.298622 5034 generic.go:334] "Generic (PLEG): container finished" podID="a4a7982e-25f8-4f97-9db5-1c828835ae84" containerID="bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.299208 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.299798 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a4a7982e-25f8-4f97-9db5-1c828835ae84","Type":"ContainerDied","Data":"bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.299832 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a4a7982e-25f8-4f97-9db5-1c828835ae84","Type":"ContainerDied","Data":"fdb5efb4b03dad314ade9b05102ed3489f7acd9f9959529f443cffed875fc576"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.300591 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11dc2db-1f91-4ec6-9efd-333fcafface4-logs\") pod \"d11dc2db-1f91-4ec6-9efd-333fcafface4\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.300643 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-scripts\") pod \"d11dc2db-1f91-4ec6-9efd-333fcafface4\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.301754 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9x6s\" (UniqueName: \"kubernetes.io/projected/8d62b8ca-f71a-424f-bbee-cc709c382ba9-kube-api-access-s9x6s\") pod \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\" (UID: \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.301811 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjr82\" (UniqueName: 
\"kubernetes.io/projected/d11dc2db-1f91-4ec6-9efd-333fcafface4-kube-api-access-cjr82\") pod \"d11dc2db-1f91-4ec6-9efd-333fcafface4\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.301839 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xmsj\" (UniqueName: \"kubernetes.io/projected/352134f8-9e6a-487a-8afd-b70ab941cd17-kube-api-access-7xmsj\") pod \"352134f8-9e6a-487a-8afd-b70ab941cd17\" (UID: \"352134f8-9e6a-487a-8afd-b70ab941cd17\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.301881 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d62b8ca-f71a-424f-bbee-cc709c382ba9-operator-scripts\") pod \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\" (UID: \"8d62b8ca-f71a-424f-bbee-cc709c382ba9\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.302018 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11dc2db-1f91-4ec6-9efd-333fcafface4-logs" (OuterVolumeSpecName: "logs") pod "d11dc2db-1f91-4ec6-9efd-333fcafface4" (UID: "d11dc2db-1f91-4ec6-9efd-333fcafface4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.302599 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-internal-tls-certs\") pod \"d11dc2db-1f91-4ec6-9efd-333fcafface4\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.302631 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-public-tls-certs\") pod \"d11dc2db-1f91-4ec6-9efd-333fcafface4\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.302677 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-config-data\") pod \"d11dc2db-1f91-4ec6-9efd-333fcafface4\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.302746 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-combined-ca-bundle\") pod \"d11dc2db-1f91-4ec6-9efd-333fcafface4\" (UID: \"d11dc2db-1f91-4ec6-9efd-333fcafface4\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.302856 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/352134f8-9e6a-487a-8afd-b70ab941cd17-operator-scripts\") pod \"352134f8-9e6a-487a-8afd-b70ab941cd17\" (UID: \"352134f8-9e6a-487a-8afd-b70ab941cd17\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.304041 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d62b8ca-f71a-424f-bbee-cc709c382ba9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d62b8ca-f71a-424f-bbee-cc709c382ba9" (UID: "8d62b8ca-f71a-424f-bbee-cc709c382ba9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.305064 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d62b8ca-f71a-424f-bbee-cc709c382ba9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.305188 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86527c2-480f-4508-be25-9b2eab1f4274-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.305200 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11dc2db-1f91-4ec6-9efd-333fcafface4-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.305530 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/352134f8-9e6a-487a-8afd-b70ab941cd17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "352134f8-9e6a-487a-8afd-b70ab941cd17" (UID: "352134f8-9e6a-487a-8afd-b70ab941cd17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.307702 5034 generic.go:334] "Generic (PLEG): container finished" podID="eaa3282d-5044-490b-be8e-5b721c49d338" containerID="e71299f8473ea6e97ab3f521671935fa9ae99d0a935b71e63ce2ceb108169b56" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.307840 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eaa3282d-5044-490b-be8e-5b721c49d338","Type":"ContainerDied","Data":"e71299f8473ea6e97ab3f521671935fa9ae99d0a935b71e63ce2ceb108169b56"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.308415 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11dc2db-1f91-4ec6-9efd-333fcafface4-kube-api-access-cjr82" (OuterVolumeSpecName: "kube-api-access-cjr82") pod "d11dc2db-1f91-4ec6-9efd-333fcafface4" (UID: "d11dc2db-1f91-4ec6-9efd-333fcafface4"). InnerVolumeSpecName "kube-api-access-cjr82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.308684 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352134f8-9e6a-487a-8afd-b70ab941cd17-kube-api-access-7xmsj" (OuterVolumeSpecName: "kube-api-access-7xmsj") pod "352134f8-9e6a-487a-8afd-b70ab941cd17" (UID: "352134f8-9e6a-487a-8afd-b70ab941cd17"). InnerVolumeSpecName "kube-api-access-7xmsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.316377 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bb0c349d-e74e-49eb-ba86-8a435d15ba66","Type":"ContainerDied","Data":"905d0940dbba86815450e0a93ce49dc3d120d14cad9d7610c63e16aa8f3dae83"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.316506 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.323110 5034 scope.go:117] "RemoveContainer" containerID="0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2" Jan 05 22:16:00 crc kubenswrapper[5034]: E0105 22:16:00.323797 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2\": container with ID starting with 0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2 not found: ID does not exist" containerID="0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.327501 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-scripts" (OuterVolumeSpecName: "scripts") pod "d11dc2db-1f91-4ec6-9efd-333fcafface4" (UID: "d11dc2db-1f91-4ec6-9efd-333fcafface4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.323841 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2"} err="failed to get container status \"0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2\": rpc error: code = NotFound desc = could not find container \"0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2\": container with ID starting with 0965b30a14bbab15aac7b7df37352dd5f1160c0d7ebbc21a429988bff411e4b2 not found: ID does not exist" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.328185 5034 scope.go:117] "RemoveContainer" containerID="08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652" Jan 05 22:16:00 crc kubenswrapper[5034]: E0105 22:16:00.329054 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652\": container with ID starting with 08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652 not found: ID does not exist" containerID="08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.329138 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652"} err="failed to get container status \"08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652\": rpc error: code = NotFound desc = could not find container \"08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652\": container with ID starting with 08b272924183655ccd9a1ed2488de278dd2c39e0ee3e49574c8acb6608063652 not found: ID does not exist" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.329167 5034 scope.go:117] "RemoveContainer" containerID="bee3983507957a52c6825ce5a11b7300dfc8f910469ec7d3ffd94dc08225c1ca" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.329208 5034 generic.go:334] "Generic (PLEG): container finished" podID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerID="d0c499f0a927479b340ab820f58c0578043492c63baf9a7836426b0f832cdd3a" exitCode=0 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.329238 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"dc1f97e4-be98-4c2a-b819-17d9c3b0be51","Type":"ContainerDied","Data":"d0c499f0a927479b340ab820f58c0578043492c63baf9a7836426b0f832cdd3a"} Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.351683 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6d6dccdcd5-gglfm"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.357025 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6d6dccdcd5-gglfm"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.363791 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d62b8ca-f71a-424f-bbee-cc709c382ba9-kube-api-access-s9x6s" (OuterVolumeSpecName: "kube-api-access-s9x6s") pod "8d62b8ca-f71a-424f-bbee-cc709c382ba9" (UID: "8d62b8ca-f71a-424f-bbee-cc709c382ba9"). InnerVolumeSpecName "kube-api-access-s9x6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.367483 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.374047 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.405225 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.405980 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-internal-tls-certs\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.406141 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-scripts\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.406221 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-public-tls-certs\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.406308 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data-custom\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.406365 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54834f39-7569-4cf3-812d-2c6d1bd161b8-etc-machine-id\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.406398 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54834f39-7569-4cf3-812d-2c6d1bd161b8-logs\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc 
kubenswrapper[5034]: I0105 22:16:00.406446 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x84k2\" (UniqueName: \"kubernetes.io/projected/54834f39-7569-4cf3-812d-2c6d1bd161b8-kube-api-access-x84k2\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.406475 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-combined-ca-bundle\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.406562 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data\") pod \"54834f39-7569-4cf3-812d-2c6d1bd161b8\" (UID: \"54834f39-7569-4cf3-812d-2c6d1bd161b8\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.407115 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/352134f8-9e6a-487a-8afd-b70ab941cd17-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.407133 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.407143 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9x6s\" (UniqueName: \"kubernetes.io/projected/8d62b8ca-f71a-424f-bbee-cc709c382ba9-kube-api-access-s9x6s\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.407153 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjr82\" (UniqueName: \"kubernetes.io/projected/d11dc2db-1f91-4ec6-9efd-333fcafface4-kube-api-access-cjr82\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.407164 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xmsj\" (UniqueName: \"kubernetes.io/projected/352134f8-9e6a-487a-8afd-b70ab941cd17-kube-api-access-7xmsj\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.415147 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.420707 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54834f39-7569-4cf3-812d-2c6d1bd161b8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.421066 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54834f39-7569-4cf3-812d-2c6d1bd161b8-logs" (OuterVolumeSpecName: "logs") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.435940 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-scripts" (OuterVolumeSpecName: "scripts") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.481510 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.481561 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54834f39-7569-4cf3-812d-2c6d1bd161b8-kube-api-access-x84k2" (OuterVolumeSpecName: "kube-api-access-x84k2") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "kube-api-access-x84k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.495963 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-config-data" (OuterVolumeSpecName: "config-data") pod "d11dc2db-1f91-4ec6-9efd-333fcafface4" (UID: "d11dc2db-1f91-4ec6-9efd-333fcafface4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.509063 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.509381 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54834f39-7569-4cf3-812d-2c6d1bd161b8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.509438 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54834f39-7569-4cf3-812d-2c6d1bd161b8-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.509493 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.509544 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x84k2\" (UniqueName: \"kubernetes.io/projected/54834f39-7569-4cf3-812d-2c6d1bd161b8-kube-api-access-x84k2\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.509628 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.536460 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.575330 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d11dc2db-1f91-4ec6-9efd-333fcafface4" (UID: "d11dc2db-1f91-4ec6-9efd-333fcafface4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.593522 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data" (OuterVolumeSpecName: "config-data") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.604654 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.617253 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "54834f39-7569-4cf3-812d-2c6d1bd161b8" (UID: "54834f39-7569-4cf3-812d-2c6d1bd161b8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.630548 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.631377 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.631462 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.631544 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.631647 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54834f39-7569-4cf3-812d-2c6d1bd161b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.632365 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d11dc2db-1f91-4ec6-9efd-333fcafface4" (UID: "d11dc2db-1f91-4ec6-9efd-333fcafface4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.641001 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d11dc2db-1f91-4ec6-9efd-333fcafface4" (UID: "d11dc2db-1f91-4ec6-9efd-333fcafface4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.641266 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.733779 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.733805 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11dc2db-1f91-4ec6-9efd-333fcafface4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.743800 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.753568 5034 scope.go:117] "RemoveContainer" containerID="d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.795889 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.818150 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8420-account-create-update-qgcgf"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.830102 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.830424 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8420-account-create-update-qgcgf"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842001 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-log-httpd\") pod \"983e4ee8-36de-4b90-b18b-eed4db804a3d\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842071 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-public-tls-certs\") pod \"983e4ee8-36de-4b90-b18b-eed4db804a3d\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842174 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnwlp\" (UniqueName: \"kubernetes.io/projected/11669bb7-2e25-4817-a4e8-a487ea5b90cb-kube-api-access-vnwlp\") pod \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842210 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-internal-tls-certs\") pod \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842238 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-run-httpd\") pod \"983e4ee8-36de-4b90-b18b-eed4db804a3d\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842275 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data-custom\") pod \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842304 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-config-data\") pod \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842335 5034 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11669bb7-2e25-4817-a4e8-a487ea5b90cb-logs\") pod \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842371 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-combined-ca-bundle\") pod \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842410 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rlsm\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-kube-api-access-8rlsm\") pod \"983e4ee8-36de-4b90-b18b-eed4db804a3d\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842481 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-etc-swift\") pod \"983e4ee8-36de-4b90-b18b-eed4db804a3d\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842519 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-combined-ca-bundle\") pod \"983e4ee8-36de-4b90-b18b-eed4db804a3d\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842546 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-nova-metadata-tls-certs\") pod \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\" (UID: \"11669bb7-2e25-4817-a4e8-a487ea5b90cb\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842579 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-public-tls-certs\") pod \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842604 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0c6abd-9d45-4022-aca3-5e63949d1aab-logs\") pod \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842659 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-combined-ca-bundle\") pod \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842719 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data\") pod \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842752 5034 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-internal-tls-certs\") pod \"983e4ee8-36de-4b90-b18b-eed4db804a3d\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842772 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jz5t\" (UniqueName: \"kubernetes.io/projected/6c0c6abd-9d45-4022-aca3-5e63949d1aab-kube-api-access-9jz5t\") pod \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\" (UID: \"6c0c6abd-9d45-4022-aca3-5e63949d1aab\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.842801 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-config-data\") pod \"983e4ee8-36de-4b90-b18b-eed4db804a3d\" (UID: \"983e4ee8-36de-4b90-b18b-eed4db804a3d\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.845606 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c0c6abd-9d45-4022-aca3-5e63949d1aab-logs" (OuterVolumeSpecName: "logs") pod "6c0c6abd-9d45-4022-aca3-5e63949d1aab" (UID: "6c0c6abd-9d45-4022-aca3-5e63949d1aab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.846064 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.863747 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "983e4ee8-36de-4b90-b18b-eed4db804a3d" (UID: "983e4ee8-36de-4b90-b18b-eed4db804a3d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.865207 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "983e4ee8-36de-4b90-b18b-eed4db804a3d" (UID: "983e4ee8-36de-4b90-b18b-eed4db804a3d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.868293 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.868606 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="ceilometer-central-agent" containerID="cri-o://f1d141763da55b0e46082e6afd0859a1080cd136929e067948b616a2530eac31" gracePeriod=30 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.868756 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="proxy-httpd" containerID="cri-o://9e1dc00f3e2493bdf5a7760277688af58c81bbb233be802e5fd2fc891a6f89a3" gracePeriod=30 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.868802 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="sg-core" containerID="cri-o://3137688b20d8b842cd8f9e85cf05a9851f287206fadca80a9fae4c2672d55f96" gracePeriod=30 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.868836 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="ceilometer-notification-agent" containerID="cri-o://1d7b03a04a230b552aaf243bbc2885e5f698b8f260c1a4b1505ee39ac4fe636a" gracePeriod=30 Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.875736 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11669bb7-2e25-4817-a4e8-a487ea5b90cb-logs" (OuterVolumeSpecName: "logs") pod "11669bb7-2e25-4817-a4e8-a487ea5b90cb" (UID: "11669bb7-2e25-4817-a4e8-a487ea5b90cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.906726 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c0c6abd-9d45-4022-aca3-5e63949d1aab" (UID: "6c0c6abd-9d45-4022-aca3-5e63949d1aab"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.910934 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c0c6abd-9d45-4022-aca3-5e63949d1aab" (UID: "6c0c6abd-9d45-4022-aca3-5e63949d1aab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.918690 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0c6abd-9d45-4022-aca3-5e63949d1aab-kube-api-access-9jz5t" (OuterVolumeSpecName: "kube-api-access-9jz5t") pod "6c0c6abd-9d45-4022-aca3-5e63949d1aab" (UID: "6c0c6abd-9d45-4022-aca3-5e63949d1aab"). InnerVolumeSpecName "kube-api-access-9jz5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.938390 5034 scope.go:117] "RemoveContainer" containerID="8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.945466 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-config-data\") pod \"eaa3282d-5044-490b-be8e-5b721c49d338\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.945517 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc7lj\" (UniqueName: \"kubernetes.io/projected/eaa3282d-5044-490b-be8e-5b721c49d338-kube-api-access-jc7lj\") pod \"eaa3282d-5044-490b-be8e-5b721c49d338\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.945556 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-internal-tls-certs\") pod \"eaa3282d-5044-490b-be8e-5b721c49d338\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.945682 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-public-tls-certs\") pod \"eaa3282d-5044-490b-be8e-5b721c49d338\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.945749 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa3282d-5044-490b-be8e-5b721c49d338-logs\") pod \"eaa3282d-5044-490b-be8e-5b721c49d338\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.945800 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-combined-ca-bundle\") pod \"eaa3282d-5044-490b-be8e-5b721c49d338\" (UID: \"eaa3282d-5044-490b-be8e-5b721c49d338\") " Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.946187 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.946200 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.946210 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11669bb7-2e25-4817-a4e8-a487ea5b90cb-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.946219 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0c6abd-9d45-4022-aca3-5e63949d1aab-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.946227 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.946237 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jz5t\" (UniqueName: \"kubernetes.io/projected/6c0c6abd-9d45-4022-aca3-5e63949d1aab-kube-api-access-9jz5t\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.946246 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/983e4ee8-36de-4b90-b18b-eed4db804a3d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.959859 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "983e4ee8-36de-4b90-b18b-eed4db804a3d" (UID: "983e4ee8-36de-4b90-b18b-eed4db804a3d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.960181 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa3282d-5044-490b-be8e-5b721c49d338-logs" (OuterVolumeSpecName: "logs") pod "eaa3282d-5044-490b-be8e-5b721c49d338" (UID: "eaa3282d-5044-490b-be8e-5b721c49d338"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.971791 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:16:00 crc kubenswrapper[5034]: I0105 22:16:00.981095 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11669bb7-2e25-4817-a4e8-a487ea5b90cb-kube-api-access-vnwlp" (OuterVolumeSpecName: "kube-api-access-vnwlp") pod "11669bb7-2e25-4817-a4e8-a487ea5b90cb" (UID: "11669bb7-2e25-4817-a4e8-a487ea5b90cb"). InnerVolumeSpecName "kube-api-access-vnwlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.059941 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-httpd-run\") pod \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060054 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060190 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-scripts\") pod \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060259 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-combined-ca-bundle\") pod \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060295 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-public-tls-certs\") pod \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060316 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-logs\") pod \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060335 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-config-data\") pod \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060352 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppbdr\" (UniqueName: \"kubernetes.io/projected/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-kube-api-access-ppbdr\") pod \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\" (UID: \"4c2c8ddc-f82a-4cca-8a84-90c5713754cf\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060595 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnwlp\" (UniqueName: \"kubernetes.io/projected/11669bb7-2e25-4817-a4e8-a487ea5b90cb-kube-api-access-vnwlp\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060612 5034 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.060620 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/eaa3282d-5044-490b-be8e-5b721c49d338-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.062689 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-logs" (OuterVolumeSpecName: "logs") pod "4c2c8ddc-f82a-4cca-8a84-90c5713754cf" (UID: "4c2c8ddc-f82a-4cca-8a84-90c5713754cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.075246 5034 scope.go:117] "RemoveContainer" containerID="d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.076625 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4c2c8ddc-f82a-4cca-8a84-90c5713754cf" (UID: "4c2c8ddc-f82a-4cca-8a84-90c5713754cf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.086493 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47\": container with ID starting with d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47 not found: ID does not exist" containerID="d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.086595 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47"} err="failed to get container status \"d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47\": rpc error: code = NotFound desc = could not find container \"d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47\": container with ID starting with d531eba9842358af0bf521bb08de2d9fd5ad0b232d0e1bff9314b89f0e8d9c47 not found: ID does not exist" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.086638 5034 scope.go:117] "RemoveContainer" containerID="8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.091503 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c\": container with ID starting with 8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c not found: ID does not exist" containerID="8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.091565 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c"} err="failed to get container status \"8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c\": rpc error: code = NotFound desc = could not find container \"8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c\": container with ID starting with 8d3f6ae72abea7c63b73076c0e5e98b64a833becbe3346a6595816f901a31d8c not found: ID does not exist" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.091597 5034 scope.go:117] "RemoveContainer" 
containerID="d6abfd2461105e8e1eea8e2d6a6889e3b27bf573f5c2e81d53d24675eaa17698" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.108930 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-kube-api-access-ppbdr" (OuterVolumeSpecName: "kube-api-access-ppbdr") pod "4c2c8ddc-f82a-4cca-8a84-90c5713754cf" (UID: "4c2c8ddc-f82a-4cca-8a84-90c5713754cf"). InnerVolumeSpecName "kube-api-access-ppbdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.111519 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-pp9cm"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.115214 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-kube-api-access-8rlsm" (OuterVolumeSpecName: "kube-api-access-8rlsm") pod "983e4ee8-36de-4b90-b18b-eed4db804a3d" (UID: "983e4ee8-36de-4b90-b18b-eed4db804a3d"). InnerVolumeSpecName "kube-api-access-8rlsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.116526 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-scripts" (OuterVolumeSpecName: "scripts") pod "4c2c8ddc-f82a-4cca-8a84-90c5713754cf" (UID: "4c2c8ddc-f82a-4cca-8a84-90c5713754cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.130933 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.131185 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "4c2c8ddc-f82a-4cca-8a84-90c5713754cf" (UID: "4c2c8ddc-f82a-4cca-8a84-90c5713754cf"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.132598 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-dbbc-account-create-update-pp9cm"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.135424 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa3282d-5044-490b-be8e-5b721c49d338-kube-api-access-jc7lj" (OuterVolumeSpecName: "kube-api-access-jc7lj") pod "eaa3282d-5044-490b-be8e-5b721c49d338" (UID: "eaa3282d-5044-490b-be8e-5b721c49d338"). InnerVolumeSpecName "kube-api-access-jc7lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.164458 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.164493 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.164502 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppbdr\" (UniqueName: \"kubernetes.io/projected/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-kube-api-access-ppbdr\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.164511 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.164523 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc7lj\" (UniqueName: \"kubernetes.io/projected/eaa3282d-5044-490b-be8e-5b721c49d338-kube-api-access-jc7lj\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.164552 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.164563 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rlsm\" (UniqueName: \"kubernetes.io/projected/983e4ee8-36de-4b90-b18b-eed4db804a3d-kube-api-access-8rlsm\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.177312 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaa3282d-5044-490b-be8e-5b721c49d338" (UID: "eaa3282d-5044-490b-be8e-5b721c49d338"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.187961 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.188354 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5c91a26e-489c-40b8-bf4b-b60f65431df0" containerName="kube-state-metrics" containerID="cri-o://993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8" gracePeriod=30 Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.189097 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.189202 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data podName:94526d3f-1e21-4eef-abb7-5cd05bfb1670 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:09.18917722 +0000 UTC m=+1461.561176659 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data") pod "rabbitmq-server-0" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670") : configmap "rabbitmq-config-data" not found Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.191680 5034 scope.go:117] "RemoveContainer" containerID="ed46a153785fa5cc88884b8676fd407f393552edec4eb2fef58f1e35704d646a" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.228353 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-config-data" (OuterVolumeSpecName: "config-data") pod "eaa3282d-5044-490b-be8e-5b721c49d338" (UID: "eaa3282d-5044-490b-be8e-5b721c49d338"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.228486 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11669bb7-2e25-4817-a4e8-a487ea5b90cb" (UID: "11669bb7-2e25-4817-a4e8-a487ea5b90cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.268763 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-httpd-run\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.268972 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-config-data\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.269012 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-logs\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.270207 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.270275 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvs6k\" (UniqueName: \"kubernetes.io/projected/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-kube-api-access-vvs6k\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.270376 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-combined-ca-bundle\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.270469 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-internal-tls-certs\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.270589 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-scripts\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.272008 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.272041 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.272058 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.281044 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.282431 5034 scope.go:117] "RemoveContainer" containerID="bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.282444 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-logs" (OuterVolumeSpecName: "logs") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.309118 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-scripts" (OuterVolumeSpecName: "scripts") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.309323 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.310196 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "983e4ee8-36de-4b90-b18b-eed4db804a3d" (UID: "983e4ee8-36de-4b90-b18b-eed4db804a3d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.321303 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.321641 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="44fc54fc-2187-4b43-8e20-e8c84b8f54d3" containerName="memcached" containerID="cri-o://663dddc4729d62810041b4ac300dd6293f55ca190f90f1a3e6f6b67eea444427" gracePeriod=30 Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.331531 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-kube-api-access-vvs6k" (OuterVolumeSpecName: "kube-api-access-vvs6k") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "kube-api-access-vvs6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.332067 5034 scope.go:117] "RemoveContainer" containerID="bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.343287 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data" (OuterVolumeSpecName: "config-data") pod "6c0c6abd-9d45-4022-aca3-5e63949d1aab" (UID: "6c0c6abd-9d45-4022-aca3-5e63949d1aab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.345297 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a\": container with ID starting with bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a not found: ID does not exist" containerID="bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.345385 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a"} err="failed to get container status \"bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a\": rpc error: code = NotFound desc = could not find container \"bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a\": container with ID starting with bebd30c79e59cbf292715d9cd23fd60c584f0738455fe535146df6a405843f8a not found: ID does not exist" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.345425 5034 scope.go:117] "RemoveContainer" containerID="3960d5a24203a890e55b4c5a09107afdae62bb85f6aa67fa283d78bfd0a56edd" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.359241 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4d3f-account-create-update-q7fvm"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.367316 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eaa3282d-5044-490b-be8e-5b721c49d338" (UID: "eaa3282d-5044-490b-be8e-5b721c49d338"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.379432 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.379820 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-config-data" (OuterVolumeSpecName: "config-data") pod "11669bb7-2e25-4817-a4e8-a487ea5b90cb" (UID: "11669bb7-2e25-4817-a4e8-a487ea5b90cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.379886 5034 scope.go:117] "RemoveContainer" containerID="fe2064fa3b2b7e941e1493c9cb05377a7cb7976cfac4b8c759f921b2f44f5d59" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382784 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382827 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-logs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382840 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382867 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382879 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvs6k\" (UniqueName: \"kubernetes.io/projected/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-kube-api-access-vvs6k\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382895 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382906 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382918 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382928 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.382936 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.392392 
5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c2c8ddc-f82a-4cca-8a84-90c5713754cf","Type":"ContainerDied","Data":"8f8e9a9d30451a023db9dc005c255f85630941006c94ae091f487b3412bebba6"} Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.392524 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.396850 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11669bb7-2e25-4817-a4e8-a487ea5b90cb","Type":"ContainerDied","Data":"6b5f42e9b038055bad9d0746b396d80efc4e409a77f82361bd85234c134e6520"} Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.396975 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.410029 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c2c8ddc-f82a-4cca-8a84-90c5713754cf" (UID: "4c2c8ddc-f82a-4cca-8a84-90c5713754cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.412830 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eaa3282d-5044-490b-be8e-5b721c49d338" (UID: "eaa3282d-5044-490b-be8e-5b721c49d338"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.414906 5034 generic.go:334] "Generic (PLEG): container finished" podID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerID="3137688b20d8b842cd8f9e85cf05a9851f287206fadca80a9fae4c2672d55f96" exitCode=2 Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.414982 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerDied","Data":"3137688b20d8b842cd8f9e85cf05a9851f287206fadca80a9fae4c2672d55f96"} Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.416985 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc1f97e4-be98-4c2a-b819-17d9c3b0be51","Type":"ContainerDied","Data":"259ed9625b201feda963d9cd1abfab443233d945e13e03b12cc13d939ccc66d3"} Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.417119 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.418007 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "983e4ee8-36de-4b90-b18b-eed4db804a3d" (UID: "983e4ee8-36de-4b90-b18b-eed4db804a3d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.420758 5034 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-jbklw" secret="" err="secret \"galera-openstack-dockercfg-j5gxt\" not found" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.420921 5034 scope.go:117] "RemoveContainer" containerID="8418957d5569499030e540d503f2cad0da02c2f42f6a83cfde9bd58408288a6e" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.421325 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-jbklw_openstack(0623db6b-2e6a-4739-8c7f-ec9a98b51d93)\"" pod="openstack/root-account-create-update-jbklw" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.421634 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4d3f-account-create-update-q7fvm"] Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.421773 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.426379 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.434287 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6c0c6abd-9d45-4022-aca3-5e63949d1aab" (UID: "6c0c6abd-9d45-4022-aca3-5e63949d1aab"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.437534 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-config-data" (OuterVolumeSpecName: "config-data") pod "983e4ee8-36de-4b90-b18b-eed4db804a3d" (UID: "983e4ee8-36de-4b90-b18b-eed4db804a3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.437525 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.437810 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="033973ad-b5ce-4136-92d2-0a2b976324db" containerName="nova-scheduler-scheduler" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.439759 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6576bc4c77-zzdbj" event={"ID":"983e4ee8-36de-4b90-b18b-eed4db804a3d","Type":"ContainerDied","Data":"fe049e21007d27706741459ab513d6445c87271929bdfdc42770f1165dfd1877"} Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.440315 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6576bc4c77-zzdbj" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447107 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4d3f-account-create-update-wfrqh"] Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447667 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" containerName="mysql-bootstrap" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447707 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" containerName="mysql-bootstrap" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447718 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerName="placement-api" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447725 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerName="placement-api" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447732 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-server" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447740 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-server" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447752 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerName="glance-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447759 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerName="glance-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447771 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447777 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447785 5034 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="80869f0d-0e2c-4235-b5e0-3519e6c95ded" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447791 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="80869f0d-0e2c-4235-b5e0-3519e6c95ded" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447800 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447806 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447813 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447821 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447834 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerName="glance-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447840 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerName="glance-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447849 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9317f553-2101-4507-8f08-52e23105b5c1" containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447854 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9317f553-2101-4507-8f08-52e23105b5c1" containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447867 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" containerName="barbican-worker" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447873 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" containerName="barbican-worker" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447882 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447889 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447901 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447907 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447919 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447925 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" 
containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447933 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-metadata" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447939 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-metadata" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447946 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerName="glance-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447953 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerName="glance-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447961 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447966 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447973 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-api" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447980 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-api" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.447991 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerName="barbican-keystone-listener" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.447998 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerName="barbican-keystone-listener" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448006 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerName="placement-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448013 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerName="placement-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448070 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8174d3dc-0931-484a-850f-3649234ef9fc" containerName="ovn-controller" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448094 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8174d3dc-0931-484a-850f-3649234ef9fc" containerName="ovn-controller" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448101 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448107 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448119 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerName="barbican-keystone-listener-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448126 5034 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerName="barbican-keystone-listener-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448133 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerName="glance-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448139 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerName="glance-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448150 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d2026b-e43c-47d5-ad78-e532a664f033" containerName="init" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448157 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d2026b-e43c-47d5-ad78-e532a664f033" containerName="init" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448165 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" containerName="barbican-worker-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448172 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" containerName="barbican-worker-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448179 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d2026b-e43c-47d5-ad78-e532a664f033" containerName="dnsmasq-dns" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448185 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d2026b-e43c-47d5-ad78-e532a664f033" containerName="dnsmasq-dns" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448196 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" containerName="ovsdbserver-nb" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448203 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" containerName="ovsdbserver-nb" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448214 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448220 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448232 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerName="ovsdbserver-sb" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448238 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerName="ovsdbserver-sb" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448248 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a7982e-25f8-4f97-9db5-1c828835ae84" containerName="nova-cell1-conductor-conductor" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448254 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a7982e-25f8-4f97-9db5-1c828835ae84" containerName="nova-cell1-conductor-conductor" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.448268 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" containerName="galera" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.448274 5034 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" containerName="galera" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449087 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449118 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449131 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerName="barbican-keystone-listener" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449140 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerName="placement-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449150 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff813b46-2db4-46af-ad1b-3e84fcb8e33b" containerName="barbican-keystone-listener-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449161 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449183 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9317f553-2101-4507-8f08-52e23105b5c1" containerName="openstack-network-exporter" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449193 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449212 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f99f63-df74-4392-a5fc-bf090571266f" containerName="ovsdbserver-nb" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449222 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" containerName="barbican-api" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449235 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerName="glance-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449249 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" containerName="placement-api" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449260 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" containerName="barbican-worker" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449267 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-server" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449276 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerName="glance-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449283 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" containerName="barbican-worker-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449290 5034 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a4a7982e-25f8-4f97-9db5-1c828835ae84" containerName="nova-cell1-conductor-conductor" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449299 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-api" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449311 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449320 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d2026b-e43c-47d5-ad78-e532a664f033" containerName="dnsmasq-dns" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449329 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" containerName="cinder-api" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449337 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" containerName="nova-api-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449346 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" containerName="glance-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449359 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" containerName="glance-httpd" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449368 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="80869f0d-0e2c-4235-b5e0-3519e6c95ded" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449435 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8174d3dc-0931-484a-850f-3649234ef9fc" containerName="ovn-controller" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449449 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-metadata" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449464 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" containerName="nova-metadata-log" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449473 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" containerName="galera" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.449485 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="434da13f-30c5-4464-9b48-3d93ec7762d0" containerName="ovsdbserver-sb" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.450181 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.455016 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zb7v6"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.455458 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "983e4ee8-36de-4b90-b18b-eed4db804a3d" (UID: "983e4ee8-36de-4b90-b18b-eed4db804a3d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.455549 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.459978 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.464981 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "11669bb7-2e25-4817-a4e8-a487ea5b90cb" (UID: "11669bb7-2e25-4817-a4e8-a487ea5b90cb"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.471692 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zb7v6"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.478476 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-l5x88"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.483263 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-config-data" (OuterVolumeSpecName: "config-data") pod "4c2c8ddc-f82a-4cca-8a84-90c5713754cf" (UID: "4c2c8ddc-f82a-4cca-8a84-90c5713754cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.483349 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.483613 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9b4698bd-747dm" event={"ID":"6c0c6abd-9d45-4022-aca3-5e63949d1aab","Type":"ContainerDied","Data":"4b0c2f55e01d555c1150bee907fd245b8820378bbca1e93c3ce49ef7391741ea"} Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.483757 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b9b4698bd-747dm" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.483882 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-combined-ca-bundle\") pod \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\" (UID: \"dc1f97e4-be98-4c2a-b819-17d9c3b0be51\") " Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484297 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa3282d-5044-490b-be8e-5b721c49d338-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484326 5034 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11669bb7-2e25-4817-a4e8-a487ea5b90cb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484340 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484355 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484367 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484380 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484397 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484413 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484425 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983e4ee8-36de-4b90-b18b-eed4db804a3d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: W0105 22:16:01.484547 5034 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/dc1f97e4-be98-4c2a-b819-17d9c3b0be51/volumes/kubernetes.io~secret/combined-ca-bundle Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.484563 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.485872 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-l5x88"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.489116 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eaa3282d-5044-490b-be8e-5b721c49d338","Type":"ContainerDied","Data":"237e67627db05d97c5134eb5b774b3cb669a3a4dde82b6b8687f0fb29b15fa8b"} Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.489133 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.500518 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4d3f-account-create-update-wfrqh"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.502408 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4c2c8ddc-f82a-4cca-8a84-90c5713754cf" (UID: "4c2c8ddc-f82a-4cca-8a84-90c5713754cf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.505959 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.509478 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.513951 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.520647 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-664f75f5b6-lz6hv"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.528575 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7c695fbb7-pzj94"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.528822 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7c695fbb7-pzj94" podUID="da85883d-cfc8-4e82-ad5d-f0889f79b7c3" containerName="keystone-api" containerID="cri-o://939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa" gracePeriod=30 Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.549186 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-664f75f5b6-lz6hv"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.550541 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c0c6abd-9d45-4022-aca3-5e63949d1aab" (UID: "6c0c6abd-9d45-4022-aca3-5e63949d1aab"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.562172 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.579364 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2xs4w"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.586327 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpzz\" (UniqueName: \"kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz\") pod \"keystone-4d3f-account-create-update-wfrqh\" (UID: \"e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6\") " pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.586382 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts\") pod \"keystone-4d3f-account-create-update-wfrqh\" (UID: \"e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6\") " pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.586443 5034 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.586467 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2c8ddc-f82a-4cca-8a84-90c5713754cf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.586480 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.586490 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0c6abd-9d45-4022-aca3-5e63949d1aab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.586498 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.586520 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts podName:0623db6b-2e6a-4739-8c7f-ec9a98b51d93 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:02.086500499 +0000 UTC m=+1454.458499928 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts") pod "root-account-create-update-jbklw" (UID: "0623db6b-2e6a-4739-8c7f-ec9a98b51d93") : configmap "openstack-scripts" not found Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.587378 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2xs4w"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.599859 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.602447 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-config-data" (OuterVolumeSpecName: "config-data") pod "dc1f97e4-be98-4c2a-b819-17d9c3b0be51" (UID: "dc1f97e4-be98-4c2a-b819-17d9c3b0be51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.613027 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4d3f-account-create-update-wfrqh"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.626353 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jbklw"] Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.655201 5034 scope.go:117] "RemoveContainer" containerID="c73e4953491ec9f47f29267b9f26809a0789ba4fdcd8a63a9120ac77e00f3874" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.687641 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpzz\" (UniqueName: \"kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz\") pod \"keystone-4d3f-account-create-update-wfrqh\" (UID: \"e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6\") " pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.687702 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts\") pod \"keystone-4d3f-account-create-update-wfrqh\" (UID: \"e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6\") " pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.687800 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1f97e4-be98-4c2a-b819-17d9c3b0be51-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.687871 5034 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.687919 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts podName:e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:02.187904385 +0000 UTC m=+1454.559903824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts") pod "keystone-4d3f-account-create-update-wfrqh" (UID: "e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6") : configmap "openstack-scripts" not found Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.717879 5034 projected.go:194] Error preparing data for projected volume kube-api-access-xkpzz for pod openstack/keystone-4d3f-account-create-update-wfrqh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.718004 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz podName:e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:02.217968898 +0000 UTC m=+1454.589968337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xkpzz" (UniqueName: "kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz") pod "keystone-4d3f-account-create-update-wfrqh" (UID: "e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.812911 5034 scope.go:117] "RemoveContainer" containerID="4ebce1e8d8500a36a9885aca5996773d3f25ae62e84300ef4e6448cbe1e4b976" Jan 05 22:16:01 crc kubenswrapper[5034]: I0105 22:16:01.855362 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6c458b9699-9b8w4" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": dial tcp 10.217.0.153:9696: connect: connection refused" Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.897261 5034 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 05 22:16:01 crc kubenswrapper[5034]: E0105 22:16:01.897344 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data podName:65a6b236-e04b-494a-a18e-5d1a8a5ae02a nodeName:}" failed. No retries permitted until 2026-01-05 22:16:09.897326095 +0000 UTC m=+1462.269325534 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data") pod "rabbitmq-cell1-server-0" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a") : configmap "rabbitmq-cell1-config-data" not found Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.079427 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.080846 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.082211 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.082251 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="52dac0d7-1025-49a8-8130-1f0d5050331c" containerName="nova-cell0-conductor-conductor" Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.107412 5034 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.107496 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts podName:0623db6b-2e6a-4739-8c7f-ec9a98b51d93 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:03.107474505 +0000 UTC m=+1455.479474004 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts") pod "root-account-create-update-jbklw" (UID: "0623db6b-2e6a-4739-8c7f-ec9a98b51d93") : configmap "openstack-scripts" not found Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.117742 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="galera" containerID="cri-o://ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365" gracePeriod=30 Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.163763 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d670c8-6fba-43a1-a8e8-9bca9742792d" path="/var/lib/kubelet/pods/29d670c8-6fba-43a1-a8e8-9bca9742792d/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.164633 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bc6c217-9ff1-47b6-a60d-9029e501d9e0" path="/var/lib/kubelet/pods/2bc6c217-9ff1-47b6-a60d-9029e501d9e0/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.165166 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="352134f8-9e6a-487a-8afd-b70ab941cd17" path="/var/lib/kubelet/pods/352134f8-9e6a-487a-8afd-b70ab941cd17/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.165581 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54834f39-7569-4cf3-812d-2c6d1bd161b8" path="/var/lib/kubelet/pods/54834f39-7569-4cf3-812d-2c6d1bd161b8/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.166728 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb498b0-229a-430a-8fb9-4311f3c7cd88" path="/var/lib/kubelet/pods/8bb498b0-229a-430a-8fb9-4311f3c7cd88/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.167325 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d62b8ca-f71a-424f-bbee-cc709c382ba9" path="/var/lib/kubelet/pods/8d62b8ca-f71a-424f-bbee-cc709c382ba9/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.167747 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a7982e-25f8-4f97-9db5-1c828835ae84" path="/var/lib/kubelet/pods/a4a7982e-25f8-4f97-9db5-1c828835ae84/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.168922 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0c349d-e74e-49eb-ba86-8a435d15ba66" path="/var/lib/kubelet/pods/bb0c349d-e74e-49eb-ba86-8a435d15ba66/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.169550 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11dc2db-1f91-4ec6-9efd-333fcafface4" path="/var/lib/kubelet/pods/d11dc2db-1f91-4ec6-9efd-333fcafface4/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.170162 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e86527c2-480f-4508-be25-9b2eab1f4274" path="/var/lib/kubelet/pods/e86527c2-480f-4508-be25-9b2eab1f4274/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.171336 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9b2abe-27f2-42c1-b085-c58641532b1a" path="/var/lib/kubelet/pods/fa9b2abe-27f2-42c1-b085-c58641532b1a/volumes" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.209405 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts\") pod \"keystone-4d3f-account-create-update-wfrqh\" (UID: \"e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6\") " pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.209582 5034 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.209636 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts podName:e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:03.209619202 +0000 UTC m=+1455.581618641 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts") pod "keystone-4d3f-account-create-update-wfrqh" (UID: "e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6") : configmap "openstack-scripts" not found Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.270466 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.272472 5034 scope.go:117] "RemoveContainer" containerID="1b21cd994fe20b3b5d50504e11ca0a8e875e6198715a73ad6b85c6ab13ae3f03" Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.276707 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-xkpzz operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-4d3f-account-create-update-wfrqh" podUID="e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.286225 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.298534 5034 scope.go:117] "RemoveContainer" containerID="3291212e6637ec18d4d97f490e2b13376227e40062c2978ce122475ac1ccee20" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.312383 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpzz\" (UniqueName: \"kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz\") pod \"keystone-4d3f-account-create-update-wfrqh\" (UID: \"e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6\") " pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.318881 5034 projected.go:194] Error preparing data for projected volume kube-api-access-xkpzz for pod openstack/keystone-4d3f-account-create-update-wfrqh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.318972 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz podName:e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:03.318937762 +0000 UTC m=+1455.690937191 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xkpzz" (UniqueName: "kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz") pod "keystone-4d3f-account-create-update-wfrqh" (UID: "e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.350159 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.360789 5034 scope.go:117] "RemoveContainer" containerID="d0c499f0a927479b340ab820f58c0578043492c63baf9a7836426b0f832cdd3a" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.369983 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.397203 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.412814 5034 scope.go:117] "RemoveContainer" containerID="70ee7ba0dcf1db4ff6a1836f1e8d9db65589363dab8ad409a064b32b276d7892" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.415728 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-config\") pod \"5c91a26e-489c-40b8-bf4b-b60f65431df0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.415811 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-certs\") pod \"5c91a26e-489c-40b8-bf4b-b60f65431df0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.415858 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xzhh\" (UniqueName: \"kubernetes.io/projected/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-api-access-2xzhh\") pod \"5c91a26e-489c-40b8-bf4b-b60f65431df0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.415927 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-combined-ca-bundle\") pod \"5c91a26e-489c-40b8-bf4b-b60f65431df0\" (UID: \"5c91a26e-489c-40b8-bf4b-b60f65431df0\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.425463 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6576bc4c77-zzdbj"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.431747 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-api-access-2xzhh" (OuterVolumeSpecName: "kube-api-access-2xzhh") pod "5c91a26e-489c-40b8-bf4b-b60f65431df0" (UID: "5c91a26e-489c-40b8-bf4b-b60f65431df0"). InnerVolumeSpecName "kube-api-access-2xzhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.440377 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6576bc4c77-zzdbj"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.449998 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c91a26e-489c-40b8-bf4b-b60f65431df0" (UID: "5c91a26e-489c-40b8-bf4b-b60f65431df0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.491342 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b9b4698bd-747dm"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.496724 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "5c91a26e-489c-40b8-bf4b-b60f65431df0" (UID: "5c91a26e-489c-40b8-bf4b-b60f65431df0"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.508464 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b9b4698bd-747dm"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.518462 5034 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.518501 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xzhh\" (UniqueName: \"kubernetes.io/projected/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-api-access-2xzhh\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.518522 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.524595 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.532247 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.538437 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.541615 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911 is running failed: container process not found" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.542263 5034 generic.go:334] "Generic (PLEG): container finished" podID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerID="6142e99eab6f8d5fa2aa4392f035c3a6396193c921db5594487e88a07ec633b0" exitCode=0 Jan 05 
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.542314 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94526d3f-1e21-4eef-abb7-5cd05bfb1670","Type":"ContainerDied","Data":"6142e99eab6f8d5fa2aa4392f035c3a6396193c921db5594487e88a07ec633b0"}
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.542338 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94526d3f-1e21-4eef-abb7-5cd05bfb1670","Type":"ContainerDied","Data":"56b403a81f5e53425e61c41468fd91f9f162fe94a0fcd2b29ebbf4b18ee6b855"}
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.542349 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b403a81f5e53425e61c41468fd91f9f162fe94a0fcd2b29ebbf4b18ee6b855"
Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.542445 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911 is running failed: container process not found" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.542996 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911 is running failed: container process not found" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.543034 5034 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="ovn-northd"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.545796 5034 generic.go:334] "Generic (PLEG): container finished" podID="5c91a26e-489c-40b8-bf4b-b60f65431df0" containerID="993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8" exitCode=2
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.546100 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c91a26e-489c-40b8-bf4b-b60f65431df0","Type":"ContainerDied","Data":"993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8"}
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.546148 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5c91a26e-489c-40b8-bf4b-b60f65431df0","Type":"ContainerDied","Data":"fa8f608c10f02c3a911a3f3f9c03f1342a44666905c20c802565250bd054c327"}
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.546219 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.549193 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "5c91a26e-489c-40b8-bf4b-b60f65431df0" (UID: "5c91a26e-489c-40b8-bf4b-b60f65431df0"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.551001 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.561012 5034 generic.go:334] "Generic (PLEG): container finished" podID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerID="9e1dc00f3e2493bdf5a7760277688af58c81bbb233be802e5fd2fc891a6f89a3" exitCode=0
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.561145 5034 generic.go:334] "Generic (PLEG): container finished" podID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerID="f1d141763da55b0e46082e6afd0859a1080cd136929e067948b616a2530eac31" exitCode=0
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.561242 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerDied","Data":"9e1dc00f3e2493bdf5a7760277688af58c81bbb233be802e5fd2fc891a6f89a3"}
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.561358 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerDied","Data":"f1d141763da55b0e46082e6afd0859a1080cd136929e067948b616a2530eac31"}
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.570422 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d3f-account-create-update-wfrqh"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.621617 5034 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c91a26e-489c-40b8-bf4b-b60f65431df0-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.679829 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d3f-account-create-update-wfrqh"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.680405 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.688342 5034 scope.go:117] "RemoveContainer" containerID="ac34648fd83cfc034ae26bdfef9f0975cae1c33e8e010cf1a87f4c7182f7691b"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.738188 5034 scope.go:117] "RemoveContainer" containerID="f1b3c35f63294d582fd4b25a3dbec8507d92d98c35739809ed98988da8b876c1"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.763096 5034 scope.go:117] "RemoveContainer" containerID="6800fbb56148e87618fda2df370bdf264e113c0b015622bf898f8261f8fafda1"
Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.784751 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.786633 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.788888 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 05 22:16:02 crc kubenswrapper[5034]: E0105 22:16:02.788934 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="galera"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.795318 5034 scope.go:117] "RemoveContainer" containerID="dc580afcb0d5964a6770e1805a08f6ab8ba168d592cf15a21e4431f5b1c61076"
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.825943 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") "
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826031 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94526d3f-1e21-4eef-abb7-5cd05bfb1670-erlang-cookie-secret\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") "
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826193 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-tls\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") "
Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826313 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-plugins-conf\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") "
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-plugins-conf\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826388 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-plugins\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826443 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-server-conf\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826482 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-erlang-cookie\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826513 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjjlb\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-kube-api-access-kjjlb\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826565 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826605 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94526d3f-1e21-4eef-abb7-5cd05bfb1670-pod-info\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.826696 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-confd\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.829876 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.832128 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.832245 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.832462 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.834589 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.835514 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-kube-api-access-kjjlb" (OuterVolumeSpecName: "kube-api-access-kjjlb") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "kube-api-access-kjjlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.843568 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/94526d3f-1e21-4eef-abb7-5cd05bfb1670-pod-info" (OuterVolumeSpecName: "pod-info") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.845613 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94526d3f-1e21-4eef-abb7-5cd05bfb1670-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.850869 5034 scope.go:117] "RemoveContainer" containerID="e71299f8473ea6e97ab3f521671935fa9ae99d0a935b71e63ce2ceb108169b56" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.883649 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-server-conf" (OuterVolumeSpecName: "server-conf") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.906042 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data" (OuterVolumeSpecName: "config-data") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.906066 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.914026 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.915431 5034 scope.go:117] "RemoveContainer" containerID="60e0dc06f5d11e4ea971c7f7cf856032cf0326af71dd5f35ab34721c3f181e11" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929033 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929058 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjjlb\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-kube-api-access-kjjlb\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929067 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929094 5034 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94526d3f-1e21-4eef-abb7-5cd05bfb1670-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929120 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929132 5034 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94526d3f-1e21-4eef-abb7-5cd05bfb1670-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929145 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929157 5034 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929167 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.929178 5034 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/94526d3f-1e21-4eef-abb7-5cd05bfb1670-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.948673 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.988282 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_eda1f147-b2fb-4349-ba17-674073870a4b/ovn-northd/0.log" Jan 05 22:16:02 crc kubenswrapper[5034]: I0105 22:16:02.988368 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.019912 5034 scope.go:117] "RemoveContainer" containerID="993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.029340 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.030235 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-confd\") pod \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\" (UID: \"94526d3f-1e21-4eef-abb7-5cd05bfb1670\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.030762 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: W0105 22:16:03.030768 5034 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/94526d3f-1e21-4eef-abb7-5cd05bfb1670/volumes/kubernetes.io~projected/rabbitmq-confd Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.030871 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "94526d3f-1e21-4eef-abb7-5cd05bfb1670" (UID: "94526d3f-1e21-4eef-abb7-5cd05bfb1670"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.042849 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jbklw" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.046358 5034 scope.go:117] "RemoveContainer" containerID="993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8" Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.046747 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8\": container with ID starting with 993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8 not found: ID does not exist" containerID="993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.046789 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8"} err="failed to get container status \"993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8\": rpc error: code = NotFound desc = could not find container \"993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8\": container with ID starting with 993a68feba0ee1a164728bf2b9abc7f615f00ab30e22112aad03160d245517c8 not found: ID does not exist" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.131333 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-config\") pod \"eda1f147-b2fb-4349-ba17-674073870a4b\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.131406 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-scripts\") pod \"eda1f147-b2fb-4349-ba17-674073870a4b\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.131426 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-northd-tls-certs\") pod \"eda1f147-b2fb-4349-ba17-674073870a4b\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.131470 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-rundir\") pod \"eda1f147-b2fb-4349-ba17-674073870a4b\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.131507 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-combined-ca-bundle\") pod \"eda1f147-b2fb-4349-ba17-674073870a4b\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.131572 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ld8q\" (UniqueName: \"kubernetes.io/projected/eda1f147-b2fb-4349-ba17-674073870a4b-kube-api-access-2ld8q\") pod \"eda1f147-b2fb-4349-ba17-674073870a4b\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.131595 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts\") pod \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\" (UID: \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.131962 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "eda1f147-b2fb-4349-ba17-674073870a4b" (UID: "eda1f147-b2fb-4349-ba17-674073870a4b"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132035 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-config" (OuterVolumeSpecName: "config") pod "eda1f147-b2fb-4349-ba17-674073870a4b" (UID: "eda1f147-b2fb-4349-ba17-674073870a4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132043 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-scripts" (OuterVolumeSpecName: "scripts") pod "eda1f147-b2fb-4349-ba17-674073870a4b" (UID: "eda1f147-b2fb-4349-ba17-674073870a4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132256 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tfd6\" (UniqueName: \"kubernetes.io/projected/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-kube-api-access-7tfd6\") pod \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\" (UID: \"0623db6b-2e6a-4739-8c7f-ec9a98b51d93\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132296 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-metrics-certs-tls-certs\") pod \"eda1f147-b2fb-4349-ba17-674073870a4b\" (UID: \"eda1f147-b2fb-4349-ba17-674073870a4b\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132643 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0623db6b-2e6a-4739-8c7f-ec9a98b51d93" (UID: "0623db6b-2e6a-4739-8c7f-ec9a98b51d93"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132718 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132732 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eda1f147-b2fb-4349-ba17-674073870a4b-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132746 5034 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132756 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94526d3f-1e21-4eef-abb7-5cd05bfb1670-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.132768 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.136122 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-kube-api-access-7tfd6" (OuterVolumeSpecName: "kube-api-access-7tfd6") pod "0623db6b-2e6a-4739-8c7f-ec9a98b51d93" (UID: "0623db6b-2e6a-4739-8c7f-ec9a98b51d93"). InnerVolumeSpecName "kube-api-access-7tfd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.145655 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda1f147-b2fb-4349-ba17-674073870a4b-kube-api-access-2ld8q" (OuterVolumeSpecName: "kube-api-access-2ld8q") pod "eda1f147-b2fb-4349-ba17-674073870a4b" (UID: "eda1f147-b2fb-4349-ba17-674073870a4b"). InnerVolumeSpecName "kube-api-access-2ld8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.156964 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eda1f147-b2fb-4349-ba17-674073870a4b" (UID: "eda1f147-b2fb-4349-ba17-674073870a4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.217074 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "eda1f147-b2fb-4349-ba17-674073870a4b" (UID: "eda1f147-b2fb-4349-ba17-674073870a4b"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.234033 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts\") pod \"keystone-4d3f-account-create-update-wfrqh\" (UID: \"e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6\") " pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.234133 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tfd6\" (UniqueName: \"kubernetes.io/projected/0623db6b-2e6a-4739-8c7f-ec9a98b51d93-kube-api-access-7tfd6\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.234146 5034 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.234155 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.234163 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ld8q\" (UniqueName: \"kubernetes.io/projected/eda1f147-b2fb-4349-ba17-674073870a4b-kube-api-access-2ld8q\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.234221 5034 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.234267 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts podName:e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:05.234253094 +0000 UTC m=+1457.606252533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts") pod "keystone-4d3f-account-create-update-wfrqh" (UID: "e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6") : configmap "openstack-scripts" not found Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.255755 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "eda1f147-b2fb-4349-ba17-674073870a4b" (UID: "eda1f147-b2fb-4349-ba17-674073870a4b"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.337443 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpzz\" (UniqueName: \"kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz\") pod \"keystone-4d3f-account-create-update-wfrqh\" (UID: \"e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6\") " pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.337632 5034 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda1f147-b2fb-4349-ba17-674073870a4b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.344147 5034 projected.go:194] Error preparing data for projected volume kube-api-access-xkpzz for pod openstack/keystone-4d3f-account-create-update-wfrqh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.344225 5034 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz podName:e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6 nodeName:}" failed. No retries permitted until 2026-01-05 22:16:05.344204152 +0000 UTC m=+1457.716203651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-xkpzz" (UniqueName: "kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz") pod "keystone-4d3f-account-create-update-wfrqh" (UID: "e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.387013 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.540450 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-confd\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.540547 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrsht\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-kube-api-access-nrsht\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541181 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541308 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-server-conf\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541330 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-plugins-conf\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541383 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-tls\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541458 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-erlang-cookie-secret\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541492 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-erlang-cookie\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541532 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-plugins\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541584 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-pod-info\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: 
\"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.541612 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data\") pod \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\" (UID: \"65a6b236-e04b-494a-a18e-5d1a8a5ae02a\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.548458 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.548962 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.549425 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.551683 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-kube-api-access-nrsht" (OuterVolumeSpecName: "kube-api-access-nrsht") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "kube-api-access-nrsht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.559403 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-pod-info" (OuterVolumeSpecName: "pod-info") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.559621 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.559728 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.587208 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data" (OuterVolumeSpecName: "config-data") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.587356 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.597365 5034 generic.go:334] "Generic (PLEG): container finished" podID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerID="2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052" exitCode=0 Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.597422 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a6b236-e04b-494a-a18e-5d1a8a5ae02a","Type":"ContainerDied","Data":"2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052"} Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.597445 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a6b236-e04b-494a-a18e-5d1a8a5ae02a","Type":"ContainerDied","Data":"3fb09f69dbeb5387bee2bc2e11251f87022c70b7241105fe9a0b4a447d121ae2"} Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.597463 5034 scope.go:117] "RemoveContainer" containerID="2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.597586 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.604993 5034 generic.go:334] "Generic (PLEG): container finished" podID="44fc54fc-2187-4b43-8e20-e8c84b8f54d3" containerID="663dddc4729d62810041b4ac300dd6293f55ca190f90f1a3e6f6b67eea444427" exitCode=0 Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.605109 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"44fc54fc-2187-4b43-8e20-e8c84b8f54d3","Type":"ContainerDied","Data":"663dddc4729d62810041b4ac300dd6293f55ca190f90f1a3e6f6b67eea444427"} Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.613359 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jbklw" event={"ID":"0623db6b-2e6a-4739-8c7f-ec9a98b51d93","Type":"ContainerDied","Data":"3b73b163629fa502e921868df8cd950a0b863245f910c7a9e066da1e9ac99e47"} Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.613541 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jbklw" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.620490 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_eda1f147-b2fb-4349-ba17-674073870a4b/ovn-northd/0.log" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.620529 5034 generic.go:334] "Generic (PLEG): container finished" podID="eda1f147-b2fb-4349-ba17-674073870a4b" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" exitCode=139 Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.620578 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eda1f147-b2fb-4349-ba17-674073870a4b","Type":"ContainerDied","Data":"0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911"} Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.620597 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eda1f147-b2fb-4349-ba17-674073870a4b","Type":"ContainerDied","Data":"2152d44185d9dfb37a3903800fd2ffa17aff3c4304dd5c1ec3c12cb421f6845a"} Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.620657 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.624571 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d3f-account-create-update-wfrqh" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.624702 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.637778 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-server-conf" (OuterVolumeSpecName: "server-conf") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646289 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646319 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646328 5034 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646336 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646345 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrsht\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-kube-api-access-nrsht\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646370 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646379 5034 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646389 5034 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646397 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.646459 5034 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.670948 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.688960 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.693523 5034 scope.go:117] "RemoveContainer" containerID="5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.725321 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "65a6b236-e04b-494a-a18e-5d1a8a5ae02a" (UID: "65a6b236-e04b-494a-a18e-5d1a8a5ae02a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.758915 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mzs7\" (UniqueName: \"kubernetes.io/projected/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kube-api-access-8mzs7\") pod \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.759064 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kolla-config\") pod \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.759189 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-combined-ca-bundle\") pod \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.759237 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-config-data\") pod \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.759353 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-memcached-tls-certs\") pod \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\" (UID: \"44fc54fc-2187-4b43-8e20-e8c84b8f54d3\") " Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.775998 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a6b236-e04b-494a-a18e-5d1a8a5ae02a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.776071 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.776596 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "44fc54fc-2187-4b43-8e20-e8c84b8f54d3" (UID: "44fc54fc-2187-4b43-8e20-e8c84b8f54d3"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.777214 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-config-data" (OuterVolumeSpecName: "config-data") pod "44fc54fc-2187-4b43-8e20-e8c84b8f54d3" (UID: "44fc54fc-2187-4b43-8e20-e8c84b8f54d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.802298 5034 scope.go:117] "RemoveContainer" containerID="2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052" Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.802990 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052\": container with ID starting with 2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052 not found: ID does not exist" containerID="2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.803038 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052"} err="failed to get container status \"2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052\": rpc error: code = NotFound desc = could not find container \"2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052\": container with ID starting with 2c32e12d91e057972fe3908ec628fc5d7eb2c1c20039d3bc5c0890e57897c052 not found: ID does not exist" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.803060 5034 scope.go:117] "RemoveContainer" containerID="5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518" Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.806316 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518\": container with ID starting with 5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518 not found: ID does not exist" containerID="5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.806365 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518"} err="failed to get container status \"5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518\": rpc error: code = NotFound desc = could not find container \"5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518\": container with ID starting with 5bc69079183378631669b2d346655c7c16eb6ec424114c32675ad61be05d4518 not found: ID does not exist" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.806395 5034 scope.go:117] "RemoveContainer" containerID="8418957d5569499030e540d503f2cad0da02c2f42f6a83cfde9bd58408288a6e" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.810846 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kube-api-access-8mzs7" (OuterVolumeSpecName: "kube-api-access-8mzs7") pod "44fc54fc-2187-4b43-8e20-e8c84b8f54d3" (UID: "44fc54fc-2187-4b43-8e20-e8c84b8f54d3"). InnerVolumeSpecName "kube-api-access-8mzs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.816463 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44fc54fc-2187-4b43-8e20-e8c84b8f54d3" (UID: "44fc54fc-2187-4b43-8e20-e8c84b8f54d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.820970 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.853951 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "44fc54fc-2187-4b43-8e20-e8c84b8f54d3" (UID: "44fc54fc-2187-4b43-8e20-e8c84b8f54d3"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.872668 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11669bb7-2e25-4817-a4e8-a487ea5b90cb" path="/var/lib/kubelet/pods/11669bb7-2e25-4817-a4e8-a487ea5b90cb/volumes" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.873657 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2c8ddc-f82a-4cca-8a84-90c5713754cf" path="/var/lib/kubelet/pods/4c2c8ddc-f82a-4cca-8a84-90c5713754cf/volumes" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.877715 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.877750 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.877760 5034 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.877769 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mzs7\" (UniqueName: \"kubernetes.io/projected/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kube-api-access-8mzs7\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.877788 5034 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44fc54fc-2187-4b43-8e20-e8c84b8f54d3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.878241 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c91a26e-489c-40b8-bf4b-b60f65431df0" path="/var/lib/kubelet/pods/5c91a26e-489c-40b8-bf4b-b60f65431df0/volumes" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.879051 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0c6abd-9d45-4022-aca3-5e63949d1aab" path="/var/lib/kubelet/pods/6c0c6abd-9d45-4022-aca3-5e63949d1aab/volumes" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.879802 5034 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" path="/var/lib/kubelet/pods/983e4ee8-36de-4b90-b18b-eed4db804a3d/volumes" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.881252 5034 scope.go:117] "RemoveContainer" containerID="cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.881342 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc1f97e4-be98-4c2a-b819-17d9c3b0be51" path="/var/lib/kubelet/pods/dc1f97e4-be98-4c2a-b819-17d9c3b0be51/volumes" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.882387 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa3282d-5044-490b-be8e-5b721c49d338" path="/var/lib/kubelet/pods/eaa3282d-5044-490b-be8e-5b721c49d338/volumes" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.890356 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.890401 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4d3f-account-create-update-wfrqh"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.913747 5034 scope.go:117] "RemoveContainer" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.913748 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4d3f-account-create-update-wfrqh"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.927169 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.937151 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.953621 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jbklw"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.960618 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jbklw"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.962720 5034 scope.go:117] "RemoveContainer" containerID="cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17" Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.964826 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17\": container with ID starting with cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17 not found: ID does not exist" containerID="cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.964877 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17"} err="failed to get container status \"cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17\": rpc error: code = NotFound desc = could not find container \"cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17\": container with ID starting with cc869bdf4a5ec1eaf4629f3dd037cdbf707e000b12ae33475a0571111125ba17 not found: ID does not exist" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.964916 5034 scope.go:117] "RemoveContainer" 
containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" Jan 05 22:16:03 crc kubenswrapper[5034]: E0105 22:16:03.965605 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911\": container with ID starting with 0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911 not found: ID does not exist" containerID="0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.965652 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911"} err="failed to get container status \"0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911\": rpc error: code = NotFound desc = could not find container \"0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911\": container with ID starting with 0cd4ab68dfd926cf36b05ecd501b412ae1b2ffe883747cb126333b63ab425911 not found: ID does not exist" Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.968460 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 22:16:03 crc kubenswrapper[5034]: I0105 22:16:03.973374 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.082418 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkpzz\" (UniqueName: \"kubernetes.io/projected/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-kube-api-access-xkpzz\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.082451 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.195252 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.284974 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kolla-config\") pod \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.285044 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-operator-scripts\") pod \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.285136 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-default\") pod \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.285279 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-combined-ca-bundle\") pod \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.285392 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9twp\" (UniqueName: \"kubernetes.io/projected/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kube-api-access-v9twp\") pod \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.285442 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-galera-tls-certs\") pod \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.285490 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-generated\") pod \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.285515 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\" (UID: \"8c1a7050-af42-4822-9bcb-cc8ea32bd319\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.286384 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "8c1a7050-af42-4822-9bcb-cc8ea32bd319" (UID: "8c1a7050-af42-4822-9bcb-cc8ea32bd319"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.286908 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "8c1a7050-af42-4822-9bcb-cc8ea32bd319" (UID: "8c1a7050-af42-4822-9bcb-cc8ea32bd319"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.287585 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "8c1a7050-af42-4822-9bcb-cc8ea32bd319" (UID: "8c1a7050-af42-4822-9bcb-cc8ea32bd319"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.289165 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c1a7050-af42-4822-9bcb-cc8ea32bd319" (UID: "8c1a7050-af42-4822-9bcb-cc8ea32bd319"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.297369 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kube-api-access-v9twp" (OuterVolumeSpecName: "kube-api-access-v9twp") pod "8c1a7050-af42-4822-9bcb-cc8ea32bd319" (UID: "8c1a7050-af42-4822-9bcb-cc8ea32bd319"). InnerVolumeSpecName "kube-api-access-v9twp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.324927 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "8c1a7050-af42-4822-9bcb-cc8ea32bd319" (UID: "8c1a7050-af42-4822-9bcb-cc8ea32bd319"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.325793 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c1a7050-af42-4822-9bcb-cc8ea32bd319" (UID: "8c1a7050-af42-4822-9bcb-cc8ea32bd319"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.362560 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "8c1a7050-af42-4822-9bcb-cc8ea32bd319" (UID: "8c1a7050-af42-4822-9bcb-cc8ea32bd319"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.387955 5034 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.388008 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.388019 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.388030 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.388039 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9twp\" (UniqueName: \"kubernetes.io/projected/8c1a7050-af42-4822-9bcb-cc8ea32bd319-kube-api-access-v9twp\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.388048 5034 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1a7050-af42-4822-9bcb-cc8ea32bd319-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.388059 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c1a7050-af42-4822-9bcb-cc8ea32bd319-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.388101 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.406476 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.489607 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.533617 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.541284 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.645613 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"44fc54fc-2187-4b43-8e20-e8c84b8f54d3","Type":"ContainerDied","Data":"723246246b690f3edcaccfe81b8acf9cfe22551ea39ee72470a131e85a9a455c"} Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.646027 5034 scope.go:117] "RemoveContainer" containerID="663dddc4729d62810041b4ac300dd6293f55ca190f90f1a3e6f6b67eea444427" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.645891 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.648669 5034 generic.go:334] "Generic (PLEG): container finished" podID="52dac0d7-1025-49a8-8130-1f0d5050331c" containerID="32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8" exitCode=0 Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.648732 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"52dac0d7-1025-49a8-8130-1f0d5050331c","Type":"ContainerDied","Data":"32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8"} Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.648754 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"52dac0d7-1025-49a8-8130-1f0d5050331c","Type":"ContainerDied","Data":"e8b42db5670e156feb9e636459875f88758809b0ecc2c07dc235fd21ebe04537"} Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.648808 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.650455 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.652965 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.653063 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.658244 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.658516 5034 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.663016 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.672355 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.672442 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.673570 5034 generic.go:334] "Generic (PLEG): container finished" podID="033973ad-b5ce-4136-92d2-0a2b976324db" containerID="d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c" exitCode=0 Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.673707 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"033973ad-b5ce-4136-92d2-0a2b976324db","Type":"ContainerDied","Data":"d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c"} Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.673749 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"033973ad-b5ce-4136-92d2-0a2b976324db","Type":"ContainerDied","Data":"dd7d703695c764487172ee10f74d7f1730266da2aae12cfb7c22e200c0fdfc28"} Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.673813 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.676195 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.680824 5034 generic.go:334] "Generic (PLEG): container finished" podID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerID="ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365" exitCode=0 Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.680872 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8c1a7050-af42-4822-9bcb-cc8ea32bd319","Type":"ContainerDied","Data":"ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365"} Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.680904 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8c1a7050-af42-4822-9bcb-cc8ea32bd319","Type":"ContainerDied","Data":"3bfa3d426b16f1d3678ba6270c6462dca7f0d00813fdb0d14f66a92185e7b3ab"} Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.680979 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.684723 5034 scope.go:117] "RemoveContainer" containerID="32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.690486 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.692340 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6fcg\" (UniqueName: \"kubernetes.io/projected/033973ad-b5ce-4136-92d2-0a2b976324db-kube-api-access-r6fcg\") pod \"033973ad-b5ce-4136-92d2-0a2b976324db\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.692442 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-combined-ca-bundle\") pod \"033973ad-b5ce-4136-92d2-0a2b976324db\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.692499 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-combined-ca-bundle\") pod \"52dac0d7-1025-49a8-8130-1f0d5050331c\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.692561 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-config-data\") pod \"52dac0d7-1025-49a8-8130-1f0d5050331c\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.692591 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpq9m\" (UniqueName: \"kubernetes.io/projected/52dac0d7-1025-49a8-8130-1f0d5050331c-kube-api-access-wpq9m\") pod \"52dac0d7-1025-49a8-8130-1f0d5050331c\" (UID: \"52dac0d7-1025-49a8-8130-1f0d5050331c\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.692623 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-config-data\") pod \"033973ad-b5ce-4136-92d2-0a2b976324db\" (UID: \"033973ad-b5ce-4136-92d2-0a2b976324db\") " Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.705249 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52dac0d7-1025-49a8-8130-1f0d5050331c-kube-api-access-wpq9m" (OuterVolumeSpecName: "kube-api-access-wpq9m") pod "52dac0d7-1025-49a8-8130-1f0d5050331c" (UID: "52dac0d7-1025-49a8-8130-1f0d5050331c"). InnerVolumeSpecName "kube-api-access-wpq9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.711218 5034 scope.go:117] "RemoveContainer" containerID="32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.714035 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033973ad-b5ce-4136-92d2-0a2b976324db-kube-api-access-r6fcg" (OuterVolumeSpecName: "kube-api-access-r6fcg") pod "033973ad-b5ce-4136-92d2-0a2b976324db" (UID: "033973ad-b5ce-4136-92d2-0a2b976324db"). InnerVolumeSpecName "kube-api-access-r6fcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.719210 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8\": container with ID starting with 32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8 not found: ID does not exist" containerID="32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.719261 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8"} err="failed to get container status \"32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8\": rpc error: code = NotFound desc = could not find container \"32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8\": container with ID starting with 32309f99aaa80f5c06a6e043c353b4a9a021a0599fbf6dae2d2df787a2af89d8 not found: ID does not exist" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.719291 5034 scope.go:117] "RemoveContainer" containerID="d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.728521 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-config-data" (OuterVolumeSpecName: "config-data") pod "033973ad-b5ce-4136-92d2-0a2b976324db" (UID: "033973ad-b5ce-4136-92d2-0a2b976324db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.728759 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-config-data" (OuterVolumeSpecName: "config-data") pod "52dac0d7-1025-49a8-8130-1f0d5050331c" (UID: "52dac0d7-1025-49a8-8130-1f0d5050331c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.730558 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "033973ad-b5ce-4136-92d2-0a2b976324db" (UID: "033973ad-b5ce-4136-92d2-0a2b976324db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.736180 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52dac0d7-1025-49a8-8130-1f0d5050331c" (UID: "52dac0d7-1025-49a8-8130-1f0d5050331c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.794492 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6fcg\" (UniqueName: \"kubernetes.io/projected/033973ad-b5ce-4136-92d2-0a2b976324db-kube-api-access-r6fcg\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.794532 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.794551 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.794561 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dac0d7-1025-49a8-8130-1f0d5050331c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.794569 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpq9m\" (UniqueName: \"kubernetes.io/projected/52dac0d7-1025-49a8-8130-1f0d5050331c-kube-api-access-wpq9m\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.794577 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033973ad-b5ce-4136-92d2-0a2b976324db-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.880632 5034 scope.go:117] "RemoveContainer" containerID="d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c" Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.882459 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c\": container with ID starting with d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c not found: ID does not exist" containerID="d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.882498 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c"} err="failed to get container status 
\"d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c\": rpc error: code = NotFound desc = could not find container \"d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c\": container with ID starting with d841297339a617c1c7584941642e6afa36248a4b479ff21924be4c57306e320c not found: ID does not exist" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.882527 5034 scope.go:117] "RemoveContainer" containerID="ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.888681 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.898564 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.908976 5034 scope.go:117] "RemoveContainer" containerID="376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.942133 5034 scope.go:117] "RemoveContainer" containerID="ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365" Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.942851 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365\": container with ID starting with ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365 not found: ID does not exist" containerID="ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.942897 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365"} err="failed to get container status \"ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365\": rpc error: code = NotFound desc = could not find container \"ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365\": container with ID starting with ca083dea6cd6207912ddad788297408fea6390bd3efae587e9376dc162de4365 not found: ID does not exist" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.942927 5034 scope.go:117] "RemoveContainer" containerID="376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109" Jan 05 22:16:04 crc kubenswrapper[5034]: E0105 22:16:04.943473 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109\": container with ID starting with 376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109 not found: ID does not exist" containerID="376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.943510 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109"} err="failed to get container status \"376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109\": rpc error: code = NotFound desc = could not find container \"376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109\": container with ID starting with 376b3e3eee57a78a86387b0339b1ba9d4fa29bb928a25488313b2257127bb109 not found: ID does not exist" Jan 05 22:16:04 crc kubenswrapper[5034]: I0105 22:16:04.997132 5034 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.004206 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.017143 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.021975 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.183861 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.303413 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-fernet-keys\") pod \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.303495 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-scripts\") pod \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.303565 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qlj5\" (UniqueName: \"kubernetes.io/projected/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-kube-api-access-5qlj5\") pod \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.303606 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-credential-keys\") pod \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.303632 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-config-data\") pod \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.303660 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-combined-ca-bundle\") pod \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.303712 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-public-tls-certs\") pod \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.303746 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-internal-tls-certs\") pod \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\" (UID: \"da85883d-cfc8-4e82-ad5d-f0889f79b7c3\") " Jan 05 
22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.308190 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-scripts" (OuterVolumeSpecName: "scripts") pod "da85883d-cfc8-4e82-ad5d-f0889f79b7c3" (UID: "da85883d-cfc8-4e82-ad5d-f0889f79b7c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.308222 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "da85883d-cfc8-4e82-ad5d-f0889f79b7c3" (UID: "da85883d-cfc8-4e82-ad5d-f0889f79b7c3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.308246 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-kube-api-access-5qlj5" (OuterVolumeSpecName: "kube-api-access-5qlj5") pod "da85883d-cfc8-4e82-ad5d-f0889f79b7c3" (UID: "da85883d-cfc8-4e82-ad5d-f0889f79b7c3"). InnerVolumeSpecName "kube-api-access-5qlj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.312232 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "da85883d-cfc8-4e82-ad5d-f0889f79b7c3" (UID: "da85883d-cfc8-4e82-ad5d-f0889f79b7c3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.337634 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da85883d-cfc8-4e82-ad5d-f0889f79b7c3" (UID: "da85883d-cfc8-4e82-ad5d-f0889f79b7c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.337676 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-config-data" (OuterVolumeSpecName: "config-data") pod "da85883d-cfc8-4e82-ad5d-f0889f79b7c3" (UID: "da85883d-cfc8-4e82-ad5d-f0889f79b7c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.366394 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da85883d-cfc8-4e82-ad5d-f0889f79b7c3" (UID: "da85883d-cfc8-4e82-ad5d-f0889f79b7c3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.366941 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "da85883d-cfc8-4e82-ad5d-f0889f79b7c3" (UID: "da85883d-cfc8-4e82-ad5d-f0889f79b7c3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.405171 5034 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.405215 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.405230 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qlj5\" (UniqueName: \"kubernetes.io/projected/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-kube-api-access-5qlj5\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.405245 5034 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.405257 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.405271 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.405283 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.405294 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da85883d-cfc8-4e82-ad5d-f0889f79b7c3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.696551 5034 generic.go:334] "Generic (PLEG): container finished" podID="da85883d-cfc8-4e82-ad5d-f0889f79b7c3" containerID="939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa" exitCode=0 Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.696618 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c695fbb7-pzj94" event={"ID":"da85883d-cfc8-4e82-ad5d-f0889f79b7c3","Type":"ContainerDied","Data":"939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa"} Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.696643 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c695fbb7-pzj94" event={"ID":"da85883d-cfc8-4e82-ad5d-f0889f79b7c3","Type":"ContainerDied","Data":"3f72919cbdf44d5836aacffa2571ee7182b5aa4f2cd50f413521ea9e9b0e2b31"} Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.696660 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c695fbb7-pzj94" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.696665 5034 scope.go:117] "RemoveContainer" containerID="939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.700599 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6576bc4c77-zzdbj" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.169:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.700701 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6576bc4c77-zzdbj" podUID="983e4ee8-36de-4b90-b18b-eed4db804a3d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.169:8080/healthcheck\": dial tcp 10.217.0.169:8080: i/o timeout" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.729612 5034 scope.go:117] "RemoveContainer" containerID="939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa" Jan 05 22:16:05 crc kubenswrapper[5034]: E0105 22:16:05.733039 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa\": container with ID starting with 939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa not found: ID does not exist" containerID="939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.733158 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa"} err="failed to get container status \"939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa\": rpc error: code = NotFound desc = could not find container \"939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa\": container with ID starting with 939449e56452e0b6d683bd48e7014fca834e157af83e9d9bcbd853fe6f868efa not found: ID does not exist" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.758453 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7c695fbb7-pzj94"] Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.771297 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7c695fbb7-pzj94"] Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.868211 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="033973ad-b5ce-4136-92d2-0a2b976324db" path="/var/lib/kubelet/pods/033973ad-b5ce-4136-92d2-0a2b976324db/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.868936 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" path="/var/lib/kubelet/pods/0623db6b-2e6a-4739-8c7f-ec9a98b51d93/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.870187 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44fc54fc-2187-4b43-8e20-e8c84b8f54d3" path="/var/lib/kubelet/pods/44fc54fc-2187-4b43-8e20-e8c84b8f54d3/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.871384 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52dac0d7-1025-49a8-8130-1f0d5050331c" 
path="/var/lib/kubelet/pods/52dac0d7-1025-49a8-8130-1f0d5050331c/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.872095 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" path="/var/lib/kubelet/pods/65a6b236-e04b-494a-a18e-5d1a8a5ae02a/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.872840 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" path="/var/lib/kubelet/pods/8c1a7050-af42-4822-9bcb-cc8ea32bd319/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.874296 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" path="/var/lib/kubelet/pods/94526d3f-1e21-4eef-abb7-5cd05bfb1670/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.875094 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da85883d-cfc8-4e82-ad5d-f0889f79b7c3" path="/var/lib/kubelet/pods/da85883d-cfc8-4e82-ad5d-f0889f79b7c3/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.875440 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6" path="/var/lib/kubelet/pods/e0a2fe7c-7ffe-4005-9dc5-b74ef2bc4bc6/volumes" Jan 05 22:16:05 crc kubenswrapper[5034]: I0105 22:16:05.876221 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" path="/var/lib/kubelet/pods/eda1f147-b2fb-4349-ba17-674073870a4b/volumes" Jan 05 22:16:06 crc kubenswrapper[5034]: I0105 22:16:06.727041 5034 generic.go:334] "Generic (PLEG): container finished" podID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerID="1d7b03a04a230b552aaf243bbc2885e5f698b8f260c1a4b1505ee39ac4fe636a" exitCode=0 Jan 05 22:16:06 crc kubenswrapper[5034]: I0105 22:16:06.727120 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerDied","Data":"1d7b03a04a230b552aaf243bbc2885e5f698b8f260c1a4b1505ee39ac4fe636a"} Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.031722 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.110181 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="5c91a26e-489c-40b8-bf4b-b60f65431df0" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.194:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.132562 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-sg-core-conf-yaml\") pod \"71dab1f9-0430-4516-8eed-265cfd0c5be9\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.132631 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-ceilometer-tls-certs\") pod \"71dab1f9-0430-4516-8eed-265cfd0c5be9\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.132670 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-log-httpd\") pod \"71dab1f9-0430-4516-8eed-265cfd0c5be9\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.132698 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7kq8\" (UniqueName: \"kubernetes.io/projected/71dab1f9-0430-4516-8eed-265cfd0c5be9-kube-api-access-t7kq8\") pod \"71dab1f9-0430-4516-8eed-265cfd0c5be9\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.132722 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-scripts\") pod \"71dab1f9-0430-4516-8eed-265cfd0c5be9\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.132750 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-run-httpd\") pod \"71dab1f9-0430-4516-8eed-265cfd0c5be9\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.132823 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-config-data\") pod \"71dab1f9-0430-4516-8eed-265cfd0c5be9\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.132852 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-combined-ca-bundle\") pod \"71dab1f9-0430-4516-8eed-265cfd0c5be9\" (UID: \"71dab1f9-0430-4516-8eed-265cfd0c5be9\") " Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.133714 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"71dab1f9-0430-4516-8eed-265cfd0c5be9" (UID: "71dab1f9-0430-4516-8eed-265cfd0c5be9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.133846 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "71dab1f9-0430-4516-8eed-265cfd0c5be9" (UID: "71dab1f9-0430-4516-8eed-265cfd0c5be9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.138187 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-scripts" (OuterVolumeSpecName: "scripts") pod "71dab1f9-0430-4516-8eed-265cfd0c5be9" (UID: "71dab1f9-0430-4516-8eed-265cfd0c5be9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.144862 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71dab1f9-0430-4516-8eed-265cfd0c5be9-kube-api-access-t7kq8" (OuterVolumeSpecName: "kube-api-access-t7kq8") pod "71dab1f9-0430-4516-8eed-265cfd0c5be9" (UID: "71dab1f9-0430-4516-8eed-265cfd0c5be9"). InnerVolumeSpecName "kube-api-access-t7kq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.162412 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "71dab1f9-0430-4516-8eed-265cfd0c5be9" (UID: "71dab1f9-0430-4516-8eed-265cfd0c5be9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.181183 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "71dab1f9-0430-4516-8eed-265cfd0c5be9" (UID: "71dab1f9-0430-4516-8eed-265cfd0c5be9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.207997 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71dab1f9-0430-4516-8eed-265cfd0c5be9" (UID: "71dab1f9-0430-4516-8eed-265cfd0c5be9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.214265 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-config-data" (OuterVolumeSpecName: "config-data") pod "71dab1f9-0430-4516-8eed-265cfd0c5be9" (UID: "71dab1f9-0430-4516-8eed-265cfd0c5be9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.235043 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.235074 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.235097 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.235105 5034 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.235115 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.235123 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7kq8\" (UniqueName: \"kubernetes.io/projected/71dab1f9-0430-4516-8eed-265cfd0c5be9-kube-api-access-t7kq8\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.235131 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dab1f9-0430-4516-8eed-265cfd0c5be9-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.235139 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71dab1f9-0430-4516-8eed-265cfd0c5be9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.742593 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71dab1f9-0430-4516-8eed-265cfd0c5be9","Type":"ContainerDied","Data":"5198f02be5d8ed373f011589bd807db2be8768560abb4c4829f1b65d46854e19"} Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.742714 5034 scope.go:117] "RemoveContainer" containerID="9e1dc00f3e2493bdf5a7760277688af58c81bbb233be802e5fd2fc891a6f89a3" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.742997 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.769976 5034 scope.go:117] "RemoveContainer" containerID="3137688b20d8b842cd8f9e85cf05a9851f287206fadca80a9fae4c2672d55f96" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.782680 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.795554 5034 scope.go:117] "RemoveContainer" containerID="1d7b03a04a230b552aaf243bbc2885e5f698b8f260c1a4b1505ee39ac4fe636a" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.796145 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.823995 5034 scope.go:117] "RemoveContainer" containerID="f1d141763da55b0e46082e6afd0859a1080cd136929e067948b616a2530eac31" Jan 05 22:16:07 crc kubenswrapper[5034]: I0105 22:16:07.872635 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" path="/var/lib/kubelet/pods/71dab1f9-0430-4516-8eed-265cfd0c5be9/volumes" Jan 05 22:16:09 crc kubenswrapper[5034]: E0105 22:16:09.652250 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:09 crc kubenswrapper[5034]: E0105 22:16:09.652657 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:09 crc kubenswrapper[5034]: E0105 22:16:09.652856 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:09 crc kubenswrapper[5034]: E0105 22:16:09.653028 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:09 crc kubenswrapper[5034]: E0105 22:16:09.653052 5034 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" Jan 05 22:16:09 crc kubenswrapper[5034]: E0105 22:16:09.654058 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:09 crc kubenswrapper[5034]: E0105 22:16:09.654970 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:09 crc kubenswrapper[5034]: E0105 22:16:09.655001 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" Jan 05 22:16:13 crc kubenswrapper[5034]: I0105 22:16:13.811297 5034 generic.go:334] "Generic (PLEG): container finished" podID="5b457464-69a5-4e13-88a9-9e23250402d1" containerID="8a4ccd2cd507ddb6502cfdecb3eea7f0e3fcbcc526f6e6220ee67d322421fe39" exitCode=0 Jan 05 22:16:13 crc kubenswrapper[5034]: I0105 22:16:13.811401 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c458b9699-9b8w4" event={"ID":"5b457464-69a5-4e13-88a9-9e23250402d1","Type":"ContainerDied","Data":"8a4ccd2cd507ddb6502cfdecb3eea7f0e3fcbcc526f6e6220ee67d322421fe39"} Jan 05 22:16:13 crc kubenswrapper[5034]: I0105 22:16:13.888558 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.043441 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-httpd-config\") pod \"5b457464-69a5-4e13-88a9-9e23250402d1\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.043484 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-combined-ca-bundle\") pod \"5b457464-69a5-4e13-88a9-9e23250402d1\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.043537 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-config\") pod \"5b457464-69a5-4e13-88a9-9e23250402d1\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.043557 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-ovndb-tls-certs\") pod \"5b457464-69a5-4e13-88a9-9e23250402d1\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.043595 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-internal-tls-certs\") pod \"5b457464-69a5-4e13-88a9-9e23250402d1\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " Jan 05 
22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.043633 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9fvt\" (UniqueName: \"kubernetes.io/projected/5b457464-69a5-4e13-88a9-9e23250402d1-kube-api-access-z9fvt\") pod \"5b457464-69a5-4e13-88a9-9e23250402d1\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.043679 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-public-tls-certs\") pod \"5b457464-69a5-4e13-88a9-9e23250402d1\" (UID: \"5b457464-69a5-4e13-88a9-9e23250402d1\") " Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.050546 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5b457464-69a5-4e13-88a9-9e23250402d1" (UID: "5b457464-69a5-4e13-88a9-9e23250402d1"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.050591 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b457464-69a5-4e13-88a9-9e23250402d1-kube-api-access-z9fvt" (OuterVolumeSpecName: "kube-api-access-z9fvt") pod "5b457464-69a5-4e13-88a9-9e23250402d1" (UID: "5b457464-69a5-4e13-88a9-9e23250402d1"). InnerVolumeSpecName "kube-api-access-z9fvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.089050 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5b457464-69a5-4e13-88a9-9e23250402d1" (UID: "5b457464-69a5-4e13-88a9-9e23250402d1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.089668 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-config" (OuterVolumeSpecName: "config") pod "5b457464-69a5-4e13-88a9-9e23250402d1" (UID: "5b457464-69a5-4e13-88a9-9e23250402d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.094463 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b457464-69a5-4e13-88a9-9e23250402d1" (UID: "5b457464-69a5-4e13-88a9-9e23250402d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.108156 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5b457464-69a5-4e13-88a9-9e23250402d1" (UID: "5b457464-69a5-4e13-88a9-9e23250402d1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.122910 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5b457464-69a5-4e13-88a9-9e23250402d1" (UID: "5b457464-69a5-4e13-88a9-9e23250402d1"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.145719 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.145757 5034 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.145771 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.145780 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9fvt\" (UniqueName: \"kubernetes.io/projected/5b457464-69a5-4e13-88a9-9e23250402d1-kube-api-access-z9fvt\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.145791 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.145799 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.145810 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b457464-69a5-4e13-88a9-9e23250402d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:14 crc kubenswrapper[5034]: E0105 22:16:14.651434 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:14 crc kubenswrapper[5034]: E0105 22:16:14.652505 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:14 crc kubenswrapper[5034]: E0105 22:16:14.652865 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:14 crc kubenswrapper[5034]: E0105 22:16:14.652905 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:14 crc kubenswrapper[5034]: E0105 22:16:14.653226 5034 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" Jan 05 22:16:14 crc kubenswrapper[5034]: E0105 22:16:14.655985 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:14 crc kubenswrapper[5034]: E0105 22:16:14.657559 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:14 crc kubenswrapper[5034]: E0105 22:16:14.657620 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.827008 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c458b9699-9b8w4" event={"ID":"5b457464-69a5-4e13-88a9-9e23250402d1","Type":"ContainerDied","Data":"cc37d5a7396e0bc234a7d7381ebe15ea3663d835e6596f174530e8996af4d4ec"} Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.827065 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c458b9699-9b8w4" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.827156 5034 scope.go:117] "RemoveContainer" containerID="3b923196a4d918a3fdfb27750f013d3bb48b93297dee98bc255d7c448bb47281" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.862904 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c458b9699-9b8w4"] Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.863505 5034 scope.go:117] "RemoveContainer" containerID="8a4ccd2cd507ddb6502cfdecb3eea7f0e3fcbcc526f6e6220ee67d322421fe39" Jan 05 22:16:14 crc kubenswrapper[5034]: I0105 22:16:14.871064 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c458b9699-9b8w4"] Jan 05 22:16:15 crc kubenswrapper[5034]: I0105 22:16:15.850592 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" path="/var/lib/kubelet/pods/5b457464-69a5-4e13-88a9-9e23250402d1/volumes" Jan 05 22:16:19 crc kubenswrapper[5034]: E0105 22:16:19.651172 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:19 crc kubenswrapper[5034]: E0105 22:16:19.652905 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:19 crc kubenswrapper[5034]: E0105 22:16:19.655225 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:19 crc kubenswrapper[5034]: E0105 22:16:19.655829 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 05 22:16:19 crc kubenswrapper[5034]: E0105 22:16:19.655881 5034 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" Jan 05 22:16:19 crc kubenswrapper[5034]: E0105 22:16:19.657057 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:19 crc kubenswrapper[5034]: E0105 22:16:19.659455 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 05 22:16:19 crc kubenswrapper[5034]: E0105 22:16:19.659507 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v4mvr" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" Jan 05 22:16:20 crc kubenswrapper[5034]: I0105 22:16:20.469001 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:16:20 crc kubenswrapper[5034]: I0105 22:16:20.469063 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:16:23 crc kubenswrapper[5034]: I0105 22:16:23.966878 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v4mvr_a4f67d51-b26b-44be-beba-ea5874fe6375/ovs-vswitchd/0.log" Jan 05 22:16:23 crc kubenswrapper[5034]: I0105 22:16:23.975388 5034 generic.go:334] "Generic (PLEG): container finished" podID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" exitCode=137 Jan 05 22:16:23 crc kubenswrapper[5034]: I0105 22:16:23.975497 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v4mvr" event={"ID":"a4f67d51-b26b-44be-beba-ea5874fe6375","Type":"ContainerDied","Data":"493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8"} Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.283348 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v4mvr_a4f67d51-b26b-44be-beba-ea5874fe6375/ovs-vswitchd/0.log" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.284384 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.434739 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-lib\") pod \"a4f67d51-b26b-44be-beba-ea5874fe6375\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.434952 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-lib" (OuterVolumeSpecName: "var-lib") pod "a4f67d51-b26b-44be-beba-ea5874fe6375" (UID: "a4f67d51-b26b-44be-beba-ea5874fe6375"). 
InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.435034 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f67d51-b26b-44be-beba-ea5874fe6375-scripts\") pod \"a4f67d51-b26b-44be-beba-ea5874fe6375\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.435198 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjb4l\" (UniqueName: \"kubernetes.io/projected/a4f67d51-b26b-44be-beba-ea5874fe6375-kube-api-access-wjb4l\") pod \"a4f67d51-b26b-44be-beba-ea5874fe6375\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.435266 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-run\") pod \"a4f67d51-b26b-44be-beba-ea5874fe6375\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.435378 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-log\") pod \"a4f67d51-b26b-44be-beba-ea5874fe6375\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.435438 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-log" (OuterVolumeSpecName: "var-log") pod "a4f67d51-b26b-44be-beba-ea5874fe6375" (UID: "a4f67d51-b26b-44be-beba-ea5874fe6375"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.435444 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-run" (OuterVolumeSpecName: "var-run") pod "a4f67d51-b26b-44be-beba-ea5874fe6375" (UID: "a4f67d51-b26b-44be-beba-ea5874fe6375"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.435505 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-etc-ovs\") pod \"a4f67d51-b26b-44be-beba-ea5874fe6375\" (UID: \"a4f67d51-b26b-44be-beba-ea5874fe6375\") " Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.435612 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "a4f67d51-b26b-44be-beba-ea5874fe6375" (UID: "a4f67d51-b26b-44be-beba-ea5874fe6375"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.436427 5034 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.436450 5034 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-lib\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.436459 5034 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.436470 5034 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a4f67d51-b26b-44be-beba-ea5874fe6375-var-log\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.436811 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f67d51-b26b-44be-beba-ea5874fe6375-scripts" (OuterVolumeSpecName: "scripts") pod "a4f67d51-b26b-44be-beba-ea5874fe6375" (UID: "a4f67d51-b26b-44be-beba-ea5874fe6375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.442792 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f67d51-b26b-44be-beba-ea5874fe6375-kube-api-access-wjb4l" (OuterVolumeSpecName: "kube-api-access-wjb4l") pod "a4f67d51-b26b-44be-beba-ea5874fe6375" (UID: "a4f67d51-b26b-44be-beba-ea5874fe6375"). InnerVolumeSpecName "kube-api-access-wjb4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.538176 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f67d51-b26b-44be-beba-ea5874fe6375-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.538214 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjb4l\" (UniqueName: \"kubernetes.io/projected/a4f67d51-b26b-44be-beba-ea5874fe6375-kube-api-access-wjb4l\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.991717 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v4mvr_a4f67d51-b26b-44be-beba-ea5874fe6375/ovs-vswitchd/0.log" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.993171 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v4mvr" event={"ID":"a4f67d51-b26b-44be-beba-ea5874fe6375","Type":"ContainerDied","Data":"9c06874e396c32c6e21fbb89dd0184f544483db04ff7c516fb36ef304d6c5577"} Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.993230 5034 scope.go:117] "RemoveContainer" containerID="493279eebe9b9859c5cd2cd417305b4d610a5b7a5e180832383042f5cfca06d8" Jan 05 22:16:24 crc kubenswrapper[5034]: I0105 22:16:24.993351 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-v4mvr" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.043167 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-v4mvr"] Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.045366 5034 scope.go:117] "RemoveContainer" containerID="578ee507996954703b7c3b16b2f901cf0a2f430a0da0b1399984ea84e135aefb" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.049507 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-v4mvr"] Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.080533 5034 scope.go:117] "RemoveContainer" containerID="0faf886a752e27ec56ab34de4590b1ff5b59d045df96fc71a7ef8c57630f88d1" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.487655 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.553136 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-cache\") pod \"4402dece-5e7d-41e8-87e3-54ca201e2c52\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.553182 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"4402dece-5e7d-41e8-87e3-54ca201e2c52\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.553233 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-lock\") pod \"4402dece-5e7d-41e8-87e3-54ca201e2c52\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.553298 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") pod \"4402dece-5e7d-41e8-87e3-54ca201e2c52\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.553317 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxddp\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-kube-api-access-kxddp\") pod \"4402dece-5e7d-41e8-87e3-54ca201e2c52\" (UID: \"4402dece-5e7d-41e8-87e3-54ca201e2c52\") " Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.553864 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-lock" (OuterVolumeSpecName: "lock") pod "4402dece-5e7d-41e8-87e3-54ca201e2c52" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.554126 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-cache" (OuterVolumeSpecName: "cache") pod "4402dece-5e7d-41e8-87e3-54ca201e2c52" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.557145 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-kube-api-access-kxddp" (OuterVolumeSpecName: "kube-api-access-kxddp") pod "4402dece-5e7d-41e8-87e3-54ca201e2c52" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52"). InnerVolumeSpecName "kube-api-access-kxddp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.557305 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4402dece-5e7d-41e8-87e3-54ca201e2c52" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.565359 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "swift") pod "4402dece-5e7d-41e8-87e3-54ca201e2c52" (UID: "4402dece-5e7d-41e8-87e3-54ca201e2c52"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.654497 5034 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.654531 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxddp\" (UniqueName: \"kubernetes.io/projected/4402dece-5e7d-41e8-87e3-54ca201e2c52-kube-api-access-kxddp\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.654541 5034 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-cache\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.654573 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.654584 5034 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4402dece-5e7d-41e8-87e3-54ca201e2c52-lock\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.668521 5034 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.756269 5034 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:25 crc kubenswrapper[5034]: I0105 22:16:25.858666 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" path="/var/lib/kubelet/pods/a4f67d51-b26b-44be-beba-ea5874fe6375/volumes" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.010895 5034 generic.go:334] "Generic (PLEG): container finished" podID="4402dece-5e7d-41e8-87e3-54ca201e2c52" 
containerID="9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c" exitCode=137 Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.011317 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c"} Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.011354 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4402dece-5e7d-41e8-87e3-54ca201e2c52","Type":"ContainerDied","Data":"a2ceaa2f7e504c25c0515bb4a8bb0ee2eb1a0402c04c8422117f5c4fd96a705e"} Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.011373 5034 scope.go:117] "RemoveContainer" containerID="9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.011795 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.016429 5034 generic.go:334] "Generic (PLEG): container finished" podID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerID="598559a378fd6ca644d7dbe7962a49bd4a282bb0608ed7a7db6dd7fff095ac06" exitCode=137 Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.016453 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3a3c79c1-b936-44a0-bca1-68f7d69d8fab","Type":"ContainerDied","Data":"598559a378fd6ca644d7dbe7962a49bd4a282bb0608ed7a7db6dd7fff095ac06"} Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.028447 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.037838 5034 scope.go:117] "RemoveContainer" containerID="c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.041519 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.046667 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.066069 5034 scope.go:117] "RemoveContainer" containerID="0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.088527 5034 scope.go:117] "RemoveContainer" containerID="037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.105528 5034 scope.go:117] "RemoveContainer" containerID="0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.123293 5034 scope.go:117] "RemoveContainer" containerID="b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.146782 5034 scope.go:117] "RemoveContainer" containerID="832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.165007 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-etc-machine-id\") pod \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " Jan 05 22:16:26 crc 
kubenswrapper[5034]: I0105 22:16:26.165132 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data\") pod \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.165137 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3a3c79c1-b936-44a0-bca1-68f7d69d8fab" (UID: "3a3c79c1-b936-44a0-bca1-68f7d69d8fab"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.165196 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data-custom\") pod \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.165219 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-scripts\") pod \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.165300 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c85m\" (UniqueName: \"kubernetes.io/projected/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-kube-api-access-8c85m\") pod \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.165331 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-combined-ca-bundle\") pod \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\" (UID: \"3a3c79c1-b936-44a0-bca1-68f7d69d8fab\") " Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.165585 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.169232 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a3c79c1-b936-44a0-bca1-68f7d69d8fab" (UID: "3a3c79c1-b936-44a0-bca1-68f7d69d8fab"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.169570 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-scripts" (OuterVolumeSpecName: "scripts") pod "3a3c79c1-b936-44a0-bca1-68f7d69d8fab" (UID: "3a3c79c1-b936-44a0-bca1-68f7d69d8fab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.169825 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-kube-api-access-8c85m" (OuterVolumeSpecName: "kube-api-access-8c85m") pod "3a3c79c1-b936-44a0-bca1-68f7d69d8fab" (UID: "3a3c79c1-b936-44a0-bca1-68f7d69d8fab"). InnerVolumeSpecName "kube-api-access-8c85m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.219560 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a3c79c1-b936-44a0-bca1-68f7d69d8fab" (UID: "3a3c79c1-b936-44a0-bca1-68f7d69d8fab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.244848 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data" (OuterVolumeSpecName: "config-data") pod "3a3c79c1-b936-44a0-bca1-68f7d69d8fab" (UID: "3a3c79c1-b936-44a0-bca1-68f7d69d8fab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.266495 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c85m\" (UniqueName: \"kubernetes.io/projected/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-kube-api-access-8c85m\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.266529 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.266539 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.266548 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.266556 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a3c79c1-b936-44a0-bca1-68f7d69d8fab-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.296005 5034 scope.go:117] "RemoveContainer" containerID="18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.321551 5034 scope.go:117] "RemoveContainer" containerID="86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.339204 5034 scope.go:117] "RemoveContainer" containerID="41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.360653 5034 scope.go:117] "RemoveContainer" containerID="4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.379893 5034 scope.go:117] "RemoveContainer" 
containerID="f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.403481 5034 scope.go:117] "RemoveContainer" containerID="56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.433517 5034 scope.go:117] "RemoveContainer" containerID="94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.453589 5034 scope.go:117] "RemoveContainer" containerID="1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.474039 5034 scope.go:117] "RemoveContainer" containerID="9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.474627 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c\": container with ID starting with 9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c not found: ID does not exist" containerID="9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.474703 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c"} err="failed to get container status \"9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c\": rpc error: code = NotFound desc = could not find container \"9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c\": container with ID starting with 9189966b9ba77011b69b25f34d3f56c386caf75bba92762578ed04f214bc231c not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.474757 5034 scope.go:117] "RemoveContainer" containerID="c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.475491 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346\": container with ID starting with c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346 not found: ID does not exist" containerID="c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.475526 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346"} err="failed to get container status \"c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346\": rpc error: code = NotFound desc = could not find container \"c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346\": container with ID starting with c01be8eb40542747e4c839f23adcd57929632e68af871b5ca83af1274867c346 not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.475545 5034 scope.go:117] "RemoveContainer" containerID="0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.476116 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c\": container with ID starting 
with 0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c not found: ID does not exist" containerID="0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.476203 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c"} err="failed to get container status \"0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c\": rpc error: code = NotFound desc = could not find container \"0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c\": container with ID starting with 0dc906ac1214a85e6956290080ad4cdbe034702e4d9afe58f4ff0b3d2309e88c not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.476233 5034 scope.go:117] "RemoveContainer" containerID="037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.476591 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a\": container with ID starting with 037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a not found: ID does not exist" containerID="037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.476637 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a"} err="failed to get container status \"037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a\": rpc error: code = NotFound desc = could not find container \"037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a\": container with ID starting with 037bc3db43a28e414f975931a78bd46183bb57a68f01d69dcd5db8a06b10348a not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.476662 5034 scope.go:117] "RemoveContainer" containerID="0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.477052 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03\": container with ID starting with 0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03 not found: ID does not exist" containerID="0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.477137 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03"} err="failed to get container status \"0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03\": rpc error: code = NotFound desc = could not find container \"0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03\": container with ID starting with 0e857f146757009236cc01354b442e4c5cb81d11f5021cc46f33d76be3250f03 not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.477163 5034 scope.go:117] "RemoveContainer" containerID="b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.477504 5034 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7\": container with ID starting with b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7 not found: ID does not exist" containerID="b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.477541 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7"} err="failed to get container status \"b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7\": rpc error: code = NotFound desc = could not find container \"b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7\": container with ID starting with b49408e861f817f9cd9b6ae5ebbadf49e0dccc1c7101e12d837a667bdfffdaa7 not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.477565 5034 scope.go:117] "RemoveContainer" containerID="832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.478382 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed\": container with ID starting with 832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed not found: ID does not exist" containerID="832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.478422 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed"} err="failed to get container status \"832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed\": rpc error: code = NotFound desc = could not find container \"832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed\": container with ID starting with 832fe7b67db4f25a291faba41007157af2b28d680cd791584693e687e78053ed not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.478449 5034 scope.go:117] "RemoveContainer" containerID="18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.478786 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa\": container with ID starting with 18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa not found: ID does not exist" containerID="18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.478825 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa"} err="failed to get container status \"18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa\": rpc error: code = NotFound desc = could not find container \"18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa\": container with ID starting with 18c235b051c6a3328a287a9dfe1e1cccb1607b1391db901500bd8a57684744fa not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.478848 5034 scope.go:117] "RemoveContainer" 
containerID="86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.479166 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03\": container with ID starting with 86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03 not found: ID does not exist" containerID="86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.479204 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03"} err="failed to get container status \"86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03\": rpc error: code = NotFound desc = could not find container \"86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03\": container with ID starting with 86feab262326dd59337ac25d35c79f9e667b2cad2fdb1eb850480772f491ef03 not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.479228 5034 scope.go:117] "RemoveContainer" containerID="41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.479529 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4\": container with ID starting with 41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4 not found: ID does not exist" containerID="41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.479560 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4"} err="failed to get container status \"41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4\": rpc error: code = NotFound desc = could not find container \"41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4\": container with ID starting with 41d36ce1c83172c0831a91fe9a0540e0cbda5c051ab771597f6aa270cc63b4a4 not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.479585 5034 scope.go:117] "RemoveContainer" containerID="4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.479890 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542\": container with ID starting with 4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542 not found: ID does not exist" containerID="4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.479927 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542"} err="failed to get container status \"4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542\": rpc error: code = NotFound desc = could not find container \"4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542\": container with ID starting with 
4ecb933ba6fccddf364b6189db102356e573585379bdad7961115c4efed07542 not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.479951 5034 scope.go:117] "RemoveContainer" containerID="f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.480463 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9\": container with ID starting with f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9 not found: ID does not exist" containerID="f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.480538 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9"} err="failed to get container status \"f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9\": rpc error: code = NotFound desc = could not find container \"f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9\": container with ID starting with f3e85ea02fa6e397751b03d43238a63ff9e82df62295233feeb7ab66cf61d0b9 not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.480564 5034 scope.go:117] "RemoveContainer" containerID="56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.480917 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac\": container with ID starting with 56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac not found: ID does not exist" containerID="56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.480950 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac"} err="failed to get container status \"56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac\": rpc error: code = NotFound desc = could not find container \"56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac\": container with ID starting with 56fa104b003644495d123cd4dde64188d9052ba000095a83d71e34a326e6aeac not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.480970 5034 scope.go:117] "RemoveContainer" containerID="94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.481372 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370\": container with ID starting with 94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370 not found: ID does not exist" containerID="94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.481402 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370"} err="failed to get container status \"94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370\": rpc 
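
[annotation] The long run of NotFound errors above is expected noise rather than a fault: after the pod is removed, "RemoveContainer" is issued for every container ID still recorded in the pod's status, but the runtime has already deleted them, so ContainerStatus and DeleteContainer both return NotFound and the kubelet logs the error and moves on. A minimal sketch of that idempotent-removal pattern, with a stdlib sentinel error standing in for the CRI's gRPC NotFound:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound the CRI runtime returns in
// the entries above ("could not find container ...: ID does not exist").
var errNotFound = errors.New("container not found")

// removeContainer deletes a container by ID, treating "already gone" as
// success -- the reason the NotFound errors above are logged and then
// ignored: removal only has to be idempotent, not exactly-once.
func removeContainer(runtime map[string]bool, id string) error {
	if !runtime[id] {
		// Log-and-continue, mirroring "DeleteContainer returned error".
		fmt.Printf("DeleteContainer %s: %v (treated as removed)\n", id, errNotFound)
		return nil
	}
	delete(runtime, id)
	fmt.Printf("DeleteContainer %s: removed\n", id)
	return nil
}

func main() {
	runtime := map[string]bool{"9189966b9ba7": true}
	_ = removeContainer(runtime, "9189966b9ba7") // first pass removes it
	_ = removeContainer(runtime, "9189966b9ba7") // second pass is a no-op
}
```
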
error: code = NotFound desc = could not find container \"94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370\": container with ID starting with 94396c820b7dd48e05453f776e57b2db813b836789782b16abe21016777a8370 not found: ID does not exist" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.481422 5034 scope.go:117] "RemoveContainer" containerID="1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08" Jan 05 22:16:26 crc kubenswrapper[5034]: E0105 22:16:26.481754 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08\": container with ID starting with 1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08 not found: ID does not exist" containerID="1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08" Jan 05 22:16:26 crc kubenswrapper[5034]: I0105 22:16:26.481797 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08"} err="failed to get container status \"1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08\": rpc error: code = NotFound desc = could not find container \"1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08\": container with ID starting with 1185d1f04240e6144c60927c04f0f29e36511e4401a70c5610b45df69cf96d08 not found: ID does not exist" Jan 05 22:16:27 crc kubenswrapper[5034]: I0105 22:16:27.029420 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 22:16:27 crc kubenswrapper[5034]: I0105 22:16:27.029435 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3a3c79c1-b936-44a0-bca1-68f7d69d8fab","Type":"ContainerDied","Data":"3e2f2090471daafe5cf6771374c241607afa8b75057bd103cc16270ea95e6abd"} Jan 05 22:16:27 crc kubenswrapper[5034]: I0105 22:16:27.029489 5034 scope.go:117] "RemoveContainer" containerID="72960a55513e9ecef8e41d52208d6be80b39479282ae6e2f1bc413cfa48dcc2f" Jan 05 22:16:27 crc kubenswrapper[5034]: I0105 22:16:27.068483 5034 scope.go:117] "RemoveContainer" containerID="598559a378fd6ca644d7dbe7962a49bd4a282bb0608ed7a7db6dd7fff095ac06" Jan 05 22:16:27 crc kubenswrapper[5034]: I0105 22:16:27.069252 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:16:27 crc kubenswrapper[5034]: I0105 22:16:27.078184 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 22:16:27 crc kubenswrapper[5034]: I0105 22:16:27.869437 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" path="/var/lib/kubelet/pods/3a3c79c1-b936-44a0-bca1-68f7d69d8fab/volumes" Jan 05 22:16:27 crc kubenswrapper[5034]: I0105 22:16:27.870396 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" path="/var/lib/kubelet/pods/4402dece-5e7d-41e8-87e3-54ca201e2c52/volumes" Jan 05 22:16:50 crc kubenswrapper[5034]: I0105 22:16:50.468951 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:16:50 crc kubenswrapper[5034]: I0105 22:16:50.469994 
5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:16:50 crc kubenswrapper[5034]: I0105 22:16:50.470084 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:16:50 crc kubenswrapper[5034]: I0105 22:16:50.471198 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:16:50 crc kubenswrapper[5034]: I0105 22:16:50.471285 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" gracePeriod=600 Jan 05 22:16:50 crc kubenswrapper[5034]: E0105 22:16:50.597667 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:16:51 crc kubenswrapper[5034]: I0105 22:16:51.271051 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" exitCode=0 Jan 05 22:16:51 crc kubenswrapper[5034]: I0105 22:16:51.271122 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f"} Jan 05 22:16:51 crc kubenswrapper[5034]: I0105 22:16:51.271174 5034 scope.go:117] "RemoveContainer" containerID="45da2bec73ffc166cc700c72e797b90c9621bfbc99e0234553fa898f473409e8" Jan 05 22:16:51 crc kubenswrapper[5034]: I0105 22:16:51.273088 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:16:51 crc kubenswrapper[5034]: E0105 22:16:51.273531 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:16:54 crc kubenswrapper[5034]: I0105 22:16:54.600949 5034 scope.go:117] "RemoveContainer" containerID="a9b0af71996b2f7b5cfc0164a2338f465cc5484f2c68ff42352cd8642afd9b56" Jan 05 22:17:03 crc kubenswrapper[5034]: I0105 22:17:03.838459 5034 
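
[annotation] Here the failure mode changes: machine-config-daemon's liveness probe is refused on 127.0.0.1:8798, the kubelet kills the container with a 600s grace period, and every restart attempt from this point is deferred with "back-off 5m0s restarting failed container", repeating in the entries that follow. The delay tracks the kubelet's documented restart back-off, which starts at 10s, doubles per failure, and caps at 5m; a sketch of that schedule, under those assumed defaults:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopBackOff returns the delay before restart attempt n, assuming
// the kubelet's documented policy: start at 10s, double each failure,
// cap at 5m (the "back-off 5m0s" seen repeatedly in this log).
func crashLoopBackOff(n int) time.Duration {
	const (
		base = 10 * time.Second
		cap  = 5 * time.Minute
	)
	d := base
	for i := 1; i < n; i++ {
		d *= 2
		if d >= cap {
			return cap
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("attempt %d: wait %v\n", n, crashLoopBackOff(n))
	}
}
```

By attempt 6 the delay saturates at 5m, which is why the "Error syncing pod, skipping" lines below recur at roughly quarter-hour-of-log intervals until the back-off window finally expires.
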
scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:17:03 crc kubenswrapper[5034]: E0105 22:17:03.839777 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:17:18 crc kubenswrapper[5034]: I0105 22:17:18.838605 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:17:18 crc kubenswrapper[5034]: E0105 22:17:18.840578 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:17:33 crc kubenswrapper[5034]: I0105 22:17:33.838804 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:17:33 crc kubenswrapper[5034]: E0105 22:17:33.839702 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:17:47 crc kubenswrapper[5034]: I0105 22:17:47.843611 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:17:47 crc kubenswrapper[5034]: E0105 22:17:47.844554 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.159045 5034 scope.go:117] "RemoveContainer" containerID="7a0186b1e44ac8134f9fe51361b5dc44c3e8bf3da775b6eddb81ba01ec8b492b" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.181793 5034 scope.go:117] "RemoveContainer" containerID="2950e5f7f98d9c07a5939a1bb838fe143b1d2fc7594ef95e69cabd81559d8245" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.232185 5034 scope.go:117] "RemoveContainer" containerID="823af9d38e6968a71bc565ca14a779898c0c232faee726e3066749a4a2b5963c" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.266749 5034 scope.go:117] "RemoveContainer" containerID="6142e99eab6f8d5fa2aa4392f035c3a6396193c921db5594487e88a07ec633b0" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.296023 5034 scope.go:117] "RemoveContainer" containerID="2a5ae0b4db2a15a1ed057e63af73bcd1c1a7cffc2bb0ddc0f3dbc39f84046c12" Jan 05 22:17:55 crc 
kubenswrapper[5034]: I0105 22:17:55.318986 5034 scope.go:117] "RemoveContainer" containerID="77ee3211f16dd4c1bd18b9108069d4dd9647d3634a5906976fc4a938a4bf9f37" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.342822 5034 scope.go:117] "RemoveContainer" containerID="a99fecf8f29d0a8e970efa45fd27a60018d29e40f8d02a4683a436301044a188" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.366576 5034 scope.go:117] "RemoveContainer" containerID="407d8a05bffd4abe9ad082589bcc9ea3f018e3dbb52b094de90c3bdb95dd7f60" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.383140 5034 scope.go:117] "RemoveContainer" containerID="fe32f24daade07e4bffe647a0b7d4e77ff03fd015a4876330d971f0844a9fc2b" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.397840 5034 scope.go:117] "RemoveContainer" containerID="86cd8648ad710237e2c737321d8579ef123c2c5c6943eff3a278dc4f5216f2de" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.414225 5034 scope.go:117] "RemoveContainer" containerID="c53666cb18451ea45ec78f56963d247d0b365d45c797f66511f8ec7d56a3c013" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.445548 5034 scope.go:117] "RemoveContainer" containerID="4a876f6ca118f3044d36bf8081d2cee6ca90bf93e157fd01110cb38a9db2b531" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.464692 5034 scope.go:117] "RemoveContainer" containerID="310342d4c4d8aee71861635fa5846ee0d54bd1dcaa6418240151b58f474c18f6" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.484851 5034 scope.go:117] "RemoveContainer" containerID="6f134b9fa65defccabde82e84c6e4dc250a78dc9edb97b12d8ca3a8e9f5e1687" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.501671 5034 scope.go:117] "RemoveContainer" containerID="ac3c8e4758503e92475e2d26a4c03fea79c0c64b762eb51bc37aa6cb46466081" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.518314 5034 scope.go:117] "RemoveContainer" containerID="bfb7b48fb39b141b2930622dfff5f754765edc64fa5517e3d7f0bc67c49e0300" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.538391 5034 scope.go:117] "RemoveContainer" containerID="b249c795882794edb5ec5acb2049718d190e4b644203126c284a968743e89077" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.562482 5034 scope.go:117] "RemoveContainer" containerID="14524d7362d1054c38b2b70a84f2c2ca21a579f59ba9729100777de4ce174f2a" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.586152 5034 scope.go:117] "RemoveContainer" containerID="7830319424df571955a459862ab6f01845556f9ddae2f57038e7cc18d8f7d278" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.610036 5034 scope.go:117] "RemoveContainer" containerID="f39e48f2802b2ac2efe4275c7ddb8cb739d20126bfdde36345bd13a09f2eae54" Jan 05 22:17:55 crc kubenswrapper[5034]: I0105 22:17:55.625992 5034 scope.go:117] "RemoveContainer" containerID="9dfbea345804939d23848ac8f2b0cb84a5be4b7898cb0b1bd0d64e7e30972b18" Jan 05 22:17:59 crc kubenswrapper[5034]: I0105 22:17:59.838228 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:17:59 crc kubenswrapper[5034]: E0105 22:17:59.838820 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:18:12 crc 
kubenswrapper[5034]: I0105 22:18:12.839337 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:18:12 crc kubenswrapper[5034]: E0105 22:18:12.840403 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:18:25 crc kubenswrapper[5034]: I0105 22:18:25.838886 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:18:25 crc kubenswrapper[5034]: E0105 22:18:25.839405 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:18:36 crc kubenswrapper[5034]: I0105 22:18:36.838950 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:18:36 crc kubenswrapper[5034]: E0105 22:18:36.839668 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:18:48 crc kubenswrapper[5034]: I0105 22:18:48.837911 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:18:48 crc kubenswrapper[5034]: E0105 22:18:48.838686 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:18:55 crc kubenswrapper[5034]: I0105 22:18:55.880595 5034 scope.go:117] "RemoveContainer" containerID="55be1eb42c37b2573335e58c1951f5284c2225b3c9d54424c5c2e52359938f95" Jan 05 22:18:55 crc kubenswrapper[5034]: I0105 22:18:55.928815 5034 scope.go:117] "RemoveContainer" containerID="dc5aa9ed739ff271d86828c2e7dee17c602d4a46a4953c1b82d2563653b1a4e5" Jan 05 22:18:55 crc kubenswrapper[5034]: I0105 22:18:55.970597 5034 scope.go:117] "RemoveContainer" containerID="8dfe9432e0a3f3a5fcfa99b3016481d77d604ec3574ab04670586251b9e6233d" Jan 05 22:18:55 crc kubenswrapper[5034]: I0105 22:18:55.996351 5034 scope.go:117] "RemoveContainer" containerID="3f25f80763e2de95eb07d3e84abe961949c5c4da5f365db39eff6df0d608f658" Jan 05 22:18:56 crc kubenswrapper[5034]: I0105 22:18:56.037650 5034 scope.go:117] "RemoveContainer" 
containerID="3c442e4078594f8e149be7d1516488eb0dc3ab8b56a67057b3cba6e43abc37dd" Jan 05 22:18:56 crc kubenswrapper[5034]: I0105 22:18:56.085855 5034 scope.go:117] "RemoveContainer" containerID="7fb9caeb268b5b9cc7ce222b9523d083a7e5465a538b694b7fb7458fdc86ec5d" Jan 05 22:18:56 crc kubenswrapper[5034]: I0105 22:18:56.117998 5034 scope.go:117] "RemoveContainer" containerID="4005af6d05c0008f2863e7bac1801f0fa804e65b89c2cccb889aee58c098d158" Jan 05 22:18:56 crc kubenswrapper[5034]: I0105 22:18:56.159189 5034 scope.go:117] "RemoveContainer" containerID="b57af3a96d64d5830a8e96cc8238b65c1caa9a61ff64a604f65a7d792403e5f5" Jan 05 22:18:56 crc kubenswrapper[5034]: I0105 22:18:56.200776 5034 scope.go:117] "RemoveContainer" containerID="1f543abb49aaf470b43131d8884c14c08720cf8fb15c21caa315c8791607b026" Jan 05 22:18:59 crc kubenswrapper[5034]: I0105 22:18:59.839744 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:18:59 crc kubenswrapper[5034]: E0105 22:18:59.840694 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:19:12 crc kubenswrapper[5034]: I0105 22:19:12.839213 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:19:12 crc kubenswrapper[5034]: E0105 22:19:12.840014 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:19:25 crc kubenswrapper[5034]: I0105 22:19:25.838542 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:19:25 crc kubenswrapper[5034]: E0105 22:19:25.839392 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:19:36 crc kubenswrapper[5034]: I0105 22:19:36.839528 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:19:36 crc kubenswrapper[5034]: E0105 22:19:36.840970 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:19:49 crc kubenswrapper[5034]: I0105 22:19:49.838326 5034 
scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:19:49 crc kubenswrapper[5034]: E0105 22:19:49.840032 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:19:56 crc kubenswrapper[5034]: I0105 22:19:56.359167 5034 scope.go:117] "RemoveContainer" containerID="4561867d9728d69a02d54c63e163f903bf49385633e98dca91da8fe50f6af5ae" Jan 05 22:19:56 crc kubenswrapper[5034]: I0105 22:19:56.406951 5034 scope.go:117] "RemoveContainer" containerID="a3a6a247d8e9e0c75348093c58cfb010339ed18d23fc59c36db9fe64e0087352" Jan 05 22:19:56 crc kubenswrapper[5034]: I0105 22:19:56.443406 5034 scope.go:117] "RemoveContainer" containerID="0b0da96279521b01b3d67ec6905df324227804077a59066455d25dd323c8ca4c" Jan 05 22:19:56 crc kubenswrapper[5034]: I0105 22:19:56.476577 5034 scope.go:117] "RemoveContainer" containerID="3b497418fb4fddbdc3e540084f3bebc4162bfd69a04333747c2d1bc3236fe875" Jan 05 22:19:56 crc kubenswrapper[5034]: I0105 22:19:56.502319 5034 scope.go:117] "RemoveContainer" containerID="5b6124b6c892a751bebde222de476c2c34313e4e4765b0115b760fb0f0795d43" Jan 05 22:19:56 crc kubenswrapper[5034]: I0105 22:19:56.525629 5034 scope.go:117] "RemoveContainer" containerID="c88a64be53b91c67e8e90903134af8ffc9d01789d94fcb6053bef72cb09a6760" Jan 05 22:19:56 crc kubenswrapper[5034]: I0105 22:19:56.595528 5034 scope.go:117] "RemoveContainer" containerID="bcb55346a2dd6fa4401dc3b99af08e3b463b472ebc05f09626fc4070fce1d44f" Jan 05 22:20:03 crc kubenswrapper[5034]: I0105 22:20:03.838569 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:20:03 crc kubenswrapper[5034]: E0105 22:20:03.839381 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:20:15 crc kubenswrapper[5034]: I0105 22:20:15.838271 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:20:15 crc kubenswrapper[5034]: E0105 22:20:15.838843 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:20:29 crc kubenswrapper[5034]: I0105 22:20:29.838985 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:20:29 crc kubenswrapper[5034]: E0105 22:20:29.839702 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:20:40 crc kubenswrapper[5034]: I0105 22:20:40.839215 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:20:40 crc kubenswrapper[5034]: E0105 22:20:40.840498 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:20:54 crc kubenswrapper[5034]: I0105 22:20:54.838481 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:20:54 crc kubenswrapper[5034]: E0105 22:20:54.839262 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:20:56 crc kubenswrapper[5034]: I0105 22:20:56.720616 5034 scope.go:117] "RemoveContainer" containerID="4203de1c272e07322c3cb2fb23ab8e191fcd3c0e7992de217064a0186d50eb84" Jan 05 22:20:56 crc kubenswrapper[5034]: I0105 22:20:56.761102 5034 scope.go:117] "RemoveContainer" containerID="019162e89465dcdc97538c62463c0686f10dc17589bc7886a23b68530f20cafc" Jan 05 22:20:56 crc kubenswrapper[5034]: I0105 22:20:56.799708 5034 scope.go:117] "RemoveContainer" containerID="02eccd3dc4c2053f38511d134bd4c631260cba004f866c32569a8710aa27ac3d" Jan 05 22:21:05 crc kubenswrapper[5034]: I0105 22:21:05.838334 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:21:05 crc kubenswrapper[5034]: E0105 22:21:05.839519 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:21:19 crc kubenswrapper[5034]: I0105 22:21:19.838736 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:21:19 crc kubenswrapper[5034]: E0105 22:21:19.839459 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" 
podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:21:30 crc kubenswrapper[5034]: I0105 22:21:30.839653 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:21:30 crc kubenswrapper[5034]: E0105 22:21:30.840549 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:21:41 crc kubenswrapper[5034]: I0105 22:21:41.838700 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:21:41 crc kubenswrapper[5034]: E0105 22:21:41.839473 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:21:53 crc kubenswrapper[5034]: I0105 22:21:53.839957 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:21:54 crc kubenswrapper[5034]: I0105 22:21:54.817465 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"d80225dd0b406f15097d94022cacc358d9a138eb17e7966f964e37fdca8b2d73"} Jan 05 22:21:56 crc kubenswrapper[5034]: I0105 22:21:56.881587 5034 scope.go:117] "RemoveContainer" containerID="3a57024fa88c30e7a57854636d86d3229e9fe60d4e42969754544f5388f31269" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.797727 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n28kh"] Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.798777 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033973ad-b5ce-4136-92d2-0a2b976324db" containerName="nova-scheduler-scheduler" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.798795 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="033973ad-b5ce-4136-92d2-0a2b976324db" containerName="nova-scheduler-scheduler" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.798817 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerName="rabbitmq" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.798824 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerName="rabbitmq" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.798833 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="rsync" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.798841 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="rsync" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.798880 5034 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server-init" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.798968 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server-init" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.798983 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.798991 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799000 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerName="setup-container" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799007 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerName="setup-container" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799020 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="ceilometer-central-agent" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799029 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="ceilometer-central-agent" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799044 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da85883d-cfc8-4e82-ad5d-f0889f79b7c3" containerName="keystone-api" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799052 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="da85883d-cfc8-4e82-ad5d-f0889f79b7c3" containerName="keystone-api" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799067 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="ceilometer-notification-agent" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799134 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="ceilometer-notification-agent" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799150 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerName="probe" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799159 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerName="probe" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799169 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="swift-recon-cron" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799176 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="swift-recon-cron" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799186 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="mysql-bootstrap" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799192 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="mysql-bootstrap" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799206 5034 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44fc54fc-2187-4b43-8e20-e8c84b8f54d3" containerName="memcached" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799214 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="44fc54fc-2187-4b43-8e20-e8c84b8f54d3" containerName="memcached" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799225 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="proxy-httpd" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799234 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="proxy-httpd" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799251 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="galera" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799259 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="galera" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799269 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="openstack-network-exporter" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799277 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="openstack-network-exporter" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799292 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" containerName="mariadb-account-create-update" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799299 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" containerName="mariadb-account-create-update" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799313 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c91a26e-489c-40b8-bf4b-b60f65431df0" containerName="kube-state-metrics" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799321 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c91a26e-489c-40b8-bf4b-b60f65431df0" containerName="kube-state-metrics" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799331 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799339 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799348 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerName="cinder-scheduler" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799355 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerName="cinder-scheduler" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799369 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-expirer" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799377 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-expirer" Jan 05 22:22:46 crc 
kubenswrapper[5034]: E0105 22:22:46.799388 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799396 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799408 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="sg-core" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799415 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="sg-core" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799423 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-server" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799429 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-server" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799442 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerName="rabbitmq" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799453 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerName="rabbitmq" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799464 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-server" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799472 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-server" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799482 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-reaper" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799488 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-reaper" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799498 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799504 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799512 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799519 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799528 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerName="setup-container" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799534 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerName="setup-container" Jan 05 22:22:46 crc 
kubenswrapper[5034]: E0105 22:22:46.799542 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-api" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799549 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-api" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799556 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dac0d7-1025-49a8-8130-1f0d5050331c" containerName="nova-cell0-conductor-conductor" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799562 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dac0d7-1025-49a8-8130-1f0d5050331c" containerName="nova-cell0-conductor-conductor" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799569 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" containerName="mariadb-account-create-update" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799575 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" containerName="mariadb-account-create-update" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799585 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-httpd" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799591 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-httpd" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799601 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799607 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799615 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-updater" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799621 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-updater" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799630 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799637 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799647 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-updater" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799653 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-updater" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799664 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="ovn-northd" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799671 5034 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="ovn-northd" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799680 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-server" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799688 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-server" Jan 05 22:22:46 crc kubenswrapper[5034]: E0105 22:22:46.799694 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799700 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799832 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerName="cinder-scheduler" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799845 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799861 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="rsync" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799871 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovs-vswitchd" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799879 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-server" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799889 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799900 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="ceilometer-notification-agent" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799912 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="033973ad-b5ce-4136-92d2-0a2b976324db" containerName="nova-scheduler-scheduler" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799921 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-server" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799928 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c91a26e-489c-40b8-bf4b-b60f65431df0" containerName="kube-state-metrics" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799936 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-httpd" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799945 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-updater" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799953 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-reaper" Jan 05 22:22:46 crc 
kubenswrapper[5034]: I0105 22:22:46.799966 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dac0d7-1025-49a8-8130-1f0d5050331c" containerName="nova-cell0-conductor-conductor" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799977 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f67d51-b26b-44be-beba-ea5874fe6375" containerName="ovsdb-server" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799986 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="ovn-northd" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.799994 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1a7050-af42-4822-9bcb-cc8ea32bd319" containerName="galera" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800004 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="44fc54fc-2187-4b43-8e20-e8c84b8f54d3" containerName="memcached" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800014 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3c79c1-b936-44a0-bca1-68f7d69d8fab" containerName="probe" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800023 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="proxy-httpd" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800032 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800041 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="94526d3f-1e21-4eef-abb7-5cd05bfb1670" containerName="rabbitmq" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800051 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="sg-core" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800062 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800102 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a6b236-e04b-494a-a18e-5d1a8a5ae02a" containerName="rabbitmq" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800113 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="da85883d-cfc8-4e82-ad5d-f0889f79b7c3" containerName="keystone-api" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800125 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="71dab1f9-0430-4516-8eed-265cfd0c5be9" containerName="ceilometer-central-agent" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800137 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-server" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800146 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="container-replicator" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800155 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" containerName="mariadb-account-create-update" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800164 5034 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-expirer" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800175 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b457464-69a5-4e13-88a9-9e23250402d1" containerName="neutron-api" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800185 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1f147-b2fb-4349-ba17-674073870a4b" containerName="openstack-network-exporter" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800194 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="swift-recon-cron" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800205 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="account-auditor" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800231 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4402dece-5e7d-41e8-87e3-54ca201e2c52" containerName="object-updater" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.800585 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0623db6b-2e6a-4739-8c7f-ec9a98b51d93" containerName="mariadb-account-create-update" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.801598 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.820260 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n28kh"] Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.937548 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6npt8\" (UniqueName: \"kubernetes.io/projected/2b279e81-e2fb-43bc-bc13-2f7b750073f9-kube-api-access-6npt8\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.937667 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-catalog-content\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:46 crc kubenswrapper[5034]: I0105 22:22:46.937953 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-utilities\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:47 crc kubenswrapper[5034]: I0105 22:22:47.039924 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6npt8\" (UniqueName: \"kubernetes.io/projected/2b279e81-e2fb-43bc-bc13-2f7b750073f9-kube-api-access-6npt8\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:47 crc kubenswrapper[5034]: I0105 22:22:47.040009 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-catalog-content\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:47 crc kubenswrapper[5034]: I0105 22:22:47.040068 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-utilities\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:47 crc kubenswrapper[5034]: I0105 22:22:47.040579 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-catalog-content\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:47 crc kubenswrapper[5034]: I0105 22:22:47.040678 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-utilities\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:47 crc kubenswrapper[5034]: I0105 22:22:47.061192 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6npt8\" (UniqueName: \"kubernetes.io/projected/2b279e81-e2fb-43bc-bc13-2f7b750073f9-kube-api-access-6npt8\") pod \"community-operators-n28kh\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:47 crc kubenswrapper[5034]: I0105 22:22:47.135361 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:47 crc kubenswrapper[5034]: I0105 22:22:47.619339 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n28kh"] Jan 05 22:22:48 crc kubenswrapper[5034]: I0105 22:22:48.223305 5034 generic.go:334] "Generic (PLEG): container finished" podID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerID="291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10" exitCode=0 Jan 05 22:22:48 crc kubenswrapper[5034]: I0105 22:22:48.223365 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n28kh" event={"ID":"2b279e81-e2fb-43bc-bc13-2f7b750073f9","Type":"ContainerDied","Data":"291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10"} Jan 05 22:22:48 crc kubenswrapper[5034]: I0105 22:22:48.223703 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n28kh" event={"ID":"2b279e81-e2fb-43bc-bc13-2f7b750073f9","Type":"ContainerStarted","Data":"aed2a2659348f9cfc5cdd5ffb07d49ea03634d1ce2ac886387ad5f418c78c93a"} Jan 05 22:22:48 crc kubenswrapper[5034]: I0105 22:22:48.225936 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.197308 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sgfwx"] Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.199096 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.217723 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgfwx"] Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.269817 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-catalog-content\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.269863 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qql7n\" (UniqueName: \"kubernetes.io/projected/3584e6ad-d8e6-4f94-917e-fb321d6938e9-kube-api-access-qql7n\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.270053 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-utilities\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.371812 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-catalog-content\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.371862 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qql7n\" (UniqueName: \"kubernetes.io/projected/3584e6ad-d8e6-4f94-917e-fb321d6938e9-kube-api-access-qql7n\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.371903 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-utilities\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.372336 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-catalog-content\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.372381 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-utilities\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.393936 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qql7n\" (UniqueName: \"kubernetes.io/projected/3584e6ad-d8e6-4f94-917e-fb321d6938e9-kube-api-access-qql7n\") pod \"redhat-marketplace-sgfwx\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.528480 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.808279 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-brpjz"] Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.810994 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.823159 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brpjz"] Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.891289 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz2z4\" (UniqueName: \"kubernetes.io/projected/74183d4d-2bcf-4e23-823c-d1c9fa205f11-kube-api-access-tz2z4\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.891366 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-catalog-content\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.891400 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-utilities\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.947221 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgfwx"] Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.992669 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz2z4\" (UniqueName: \"kubernetes.io/projected/74183d4d-2bcf-4e23-823c-d1c9fa205f11-kube-api-access-tz2z4\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.993048 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-catalog-content\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.993121 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-utilities\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " 
pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.993443 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-catalog-content\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:49 crc kubenswrapper[5034]: I0105 22:22:49.993748 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-utilities\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:50 crc kubenswrapper[5034]: I0105 22:22:50.013984 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz2z4\" (UniqueName: \"kubernetes.io/projected/74183d4d-2bcf-4e23-823c-d1c9fa205f11-kube-api-access-tz2z4\") pod \"redhat-operators-brpjz\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:50 crc kubenswrapper[5034]: I0105 22:22:50.138027 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:22:50 crc kubenswrapper[5034]: I0105 22:22:50.242265 5034 generic.go:334] "Generic (PLEG): container finished" podID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerID="8e5d7cbc01af02dab692a35857476b8be917f10281ef0d431b014ce71360e4b4" exitCode=0 Jan 05 22:22:50 crc kubenswrapper[5034]: I0105 22:22:50.242304 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgfwx" event={"ID":"3584e6ad-d8e6-4f94-917e-fb321d6938e9","Type":"ContainerDied","Data":"8e5d7cbc01af02dab692a35857476b8be917f10281ef0d431b014ce71360e4b4"} Jan 05 22:22:50 crc kubenswrapper[5034]: I0105 22:22:50.242375 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgfwx" event={"ID":"3584e6ad-d8e6-4f94-917e-fb321d6938e9","Type":"ContainerStarted","Data":"d5178d68a8c8521022646e591fbaa899a6d961b654c07e0426e7b759c89f7ff2"} Jan 05 22:22:50 crc kubenswrapper[5034]: I0105 22:22:50.249820 5034 generic.go:334] "Generic (PLEG): container finished" podID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerID="fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3" exitCode=0 Jan 05 22:22:50 crc kubenswrapper[5034]: I0105 22:22:50.249853 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n28kh" event={"ID":"2b279e81-e2fb-43bc-bc13-2f7b750073f9","Type":"ContainerDied","Data":"fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3"} Jan 05 22:22:50 crc kubenswrapper[5034]: I0105 22:22:50.620538 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brpjz"] Jan 05 22:22:51 crc kubenswrapper[5034]: I0105 22:22:51.258224 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgfwx" event={"ID":"3584e6ad-d8e6-4f94-917e-fb321d6938e9","Type":"ContainerStarted","Data":"5d6740f4190f0e4353520e649da914f837cc469cc62907346dc2aa7dc00e8015"} Jan 05 22:22:51 crc kubenswrapper[5034]: I0105 22:22:51.262704 5034 generic.go:334] "Generic (PLEG): container finished" podID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" 
containerID="79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c" exitCode=0 Jan 05 22:22:51 crc kubenswrapper[5034]: I0105 22:22:51.262772 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brpjz" event={"ID":"74183d4d-2bcf-4e23-823c-d1c9fa205f11","Type":"ContainerDied","Data":"79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c"} Jan 05 22:22:51 crc kubenswrapper[5034]: I0105 22:22:51.262796 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brpjz" event={"ID":"74183d4d-2bcf-4e23-823c-d1c9fa205f11","Type":"ContainerStarted","Data":"efb60bec3fd97e0d8c8b5a0e52a623a8bef23bae7e3ae79adc2dc64b0366e4c1"} Jan 05 22:22:51 crc kubenswrapper[5034]: I0105 22:22:51.266971 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n28kh" event={"ID":"2b279e81-e2fb-43bc-bc13-2f7b750073f9","Type":"ContainerStarted","Data":"671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9"} Jan 05 22:22:51 crc kubenswrapper[5034]: I0105 22:22:51.304678 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n28kh" podStartSLOduration=2.789166848 podStartE2EDuration="5.304657338s" podCreationTimestamp="2026-01-05 22:22:46 +0000 UTC" firstStartedPulling="2026-01-05 22:22:48.225446206 +0000 UTC m=+1860.597445665" lastFinishedPulling="2026-01-05 22:22:50.740936716 +0000 UTC m=+1863.112936155" observedRunningTime="2026-01-05 22:22:51.298511613 +0000 UTC m=+1863.670511052" watchObservedRunningTime="2026-01-05 22:22:51.304657338 +0000 UTC m=+1863.676656797" Jan 05 22:22:52 crc kubenswrapper[5034]: I0105 22:22:52.276100 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brpjz" event={"ID":"74183d4d-2bcf-4e23-823c-d1c9fa205f11","Type":"ContainerStarted","Data":"8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873"} Jan 05 22:22:52 crc kubenswrapper[5034]: I0105 22:22:52.278277 5034 generic.go:334] "Generic (PLEG): container finished" podID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerID="5d6740f4190f0e4353520e649da914f837cc469cc62907346dc2aa7dc00e8015" exitCode=0 Jan 05 22:22:52 crc kubenswrapper[5034]: I0105 22:22:52.279182 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgfwx" event={"ID":"3584e6ad-d8e6-4f94-917e-fb321d6938e9","Type":"ContainerDied","Data":"5d6740f4190f0e4353520e649da914f837cc469cc62907346dc2aa7dc00e8015"} Jan 05 22:22:53 crc kubenswrapper[5034]: I0105 22:22:53.286605 5034 generic.go:334] "Generic (PLEG): container finished" podID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerID="8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873" exitCode=0 Jan 05 22:22:53 crc kubenswrapper[5034]: I0105 22:22:53.286667 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brpjz" event={"ID":"74183d4d-2bcf-4e23-823c-d1c9fa205f11","Type":"ContainerDied","Data":"8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873"} Jan 05 22:22:53 crc kubenswrapper[5034]: I0105 22:22:53.288655 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgfwx" event={"ID":"3584e6ad-d8e6-4f94-917e-fb321d6938e9","Type":"ContainerStarted","Data":"847c39407f442873450eb5f3b028fa54cf6abba1bcac94c6d9b0b945d62fc60f"} Jan 05 22:22:53 crc kubenswrapper[5034]: I0105 22:22:53.326907 5034 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sgfwx" podStartSLOduration=1.8824924360000002 podStartE2EDuration="4.326889488s" podCreationTimestamp="2026-01-05 22:22:49 +0000 UTC" firstStartedPulling="2026-01-05 22:22:50.245178544 +0000 UTC m=+1862.617177993" lastFinishedPulling="2026-01-05 22:22:52.689575606 +0000 UTC m=+1865.061575045" observedRunningTime="2026-01-05 22:22:53.324402568 +0000 UTC m=+1865.696402007" watchObservedRunningTime="2026-01-05 22:22:53.326889488 +0000 UTC m=+1865.698888927" Jan 05 22:22:54 crc kubenswrapper[5034]: I0105 22:22:54.300208 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brpjz" event={"ID":"74183d4d-2bcf-4e23-823c-d1c9fa205f11","Type":"ContainerStarted","Data":"5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4"} Jan 05 22:22:54 crc kubenswrapper[5034]: I0105 22:22:54.320175 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-brpjz" podStartSLOduration=2.851022467 podStartE2EDuration="5.320116219s" podCreationTimestamp="2026-01-05 22:22:49 +0000 UTC" firstStartedPulling="2026-01-05 22:22:51.264430595 +0000 UTC m=+1863.636430034" lastFinishedPulling="2026-01-05 22:22:53.733524347 +0000 UTC m=+1866.105523786" observedRunningTime="2026-01-05 22:22:54.319672907 +0000 UTC m=+1866.691672336" watchObservedRunningTime="2026-01-05 22:22:54.320116219 +0000 UTC m=+1866.692115718" Jan 05 22:22:57 crc kubenswrapper[5034]: I0105 22:22:57.135770 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:57 crc kubenswrapper[5034]: I0105 22:22:57.136284 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:57 crc kubenswrapper[5034]: I0105 22:22:57.209891 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:57 crc kubenswrapper[5034]: I0105 22:22:57.365601 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:22:58 crc kubenswrapper[5034]: I0105 22:22:58.183588 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n28kh"] Jan 05 22:22:59 crc kubenswrapper[5034]: I0105 22:22:59.335703 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n28kh" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerName="registry-server" containerID="cri-o://671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9" gracePeriod=2 Jan 05 22:22:59 crc kubenswrapper[5034]: I0105 22:22:59.529334 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:59 crc kubenswrapper[5034]: I0105 22:22:59.529376 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:22:59 crc kubenswrapper[5034]: I0105 22:22:59.575817 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.138875 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.139232 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.177950 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.385063 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.385517 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.793229 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.889032 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-utilities\") pod \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.889329 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6npt8\" (UniqueName: \"kubernetes.io/projected/2b279e81-e2fb-43bc-bc13-2f7b750073f9-kube-api-access-6npt8\") pod \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.889376 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-catalog-content\") pod \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\" (UID: \"2b279e81-e2fb-43bc-bc13-2f7b750073f9\") " Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.890607 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-utilities" (OuterVolumeSpecName: "utilities") pod "2b279e81-e2fb-43bc-bc13-2f7b750073f9" (UID: "2b279e81-e2fb-43bc-bc13-2f7b750073f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.895274 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b279e81-e2fb-43bc-bc13-2f7b750073f9-kube-api-access-6npt8" (OuterVolumeSpecName: "kube-api-access-6npt8") pod "2b279e81-e2fb-43bc-bc13-2f7b750073f9" (UID: "2b279e81-e2fb-43bc-bc13-2f7b750073f9"). InnerVolumeSpecName "kube-api-access-6npt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.943997 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b279e81-e2fb-43bc-bc13-2f7b750073f9" (UID: "2b279e81-e2fb-43bc-bc13-2f7b750073f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.991635 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6npt8\" (UniqueName: \"kubernetes.io/projected/2b279e81-e2fb-43bc-bc13-2f7b750073f9-kube-api-access-6npt8\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.991978 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:00 crc kubenswrapper[5034]: I0105 22:23:00.992041 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b279e81-e2fb-43bc-bc13-2f7b750073f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.355071 5034 generic.go:334] "Generic (PLEG): container finished" podID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerID="671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9" exitCode=0 Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.355155 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n28kh" event={"ID":"2b279e81-e2fb-43bc-bc13-2f7b750073f9","Type":"ContainerDied","Data":"671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9"} Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.355198 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n28kh" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.355700 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n28kh" event={"ID":"2b279e81-e2fb-43bc-bc13-2f7b750073f9","Type":"ContainerDied","Data":"aed2a2659348f9cfc5cdd5ffb07d49ea03634d1ce2ac886387ad5f418c78c93a"} Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.355730 5034 scope.go:117] "RemoveContainer" containerID="671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.379925 5034 scope.go:117] "RemoveContainer" containerID="fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.400805 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n28kh"] Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.405338 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n28kh"] Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.407001 5034 scope.go:117] "RemoveContainer" containerID="291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.444634 5034 scope.go:117] "RemoveContainer" containerID="671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9" Jan 05 22:23:01 crc kubenswrapper[5034]: E0105 22:23:01.445453 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9\": container with ID starting with 671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9 not found: ID does not exist" containerID="671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.445482 
5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9"} err="failed to get container status \"671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9\": rpc error: code = NotFound desc = could not find container \"671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9\": container with ID starting with 671063c37c5258ae9d642d70feeee6f6787aad6cdc12c7338336897da270e3c9 not found: ID does not exist" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.445504 5034 scope.go:117] "RemoveContainer" containerID="fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3" Jan 05 22:23:01 crc kubenswrapper[5034]: E0105 22:23:01.447162 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3\": container with ID starting with fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3 not found: ID does not exist" containerID="fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.447194 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3"} err="failed to get container status \"fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3\": rpc error: code = NotFound desc = could not find container \"fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3\": container with ID starting with fe4b0babb392db451539da0e2801aa5a58e6debe349c94e452c0f0f8e6cb3dd3 not found: ID does not exist" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.447209 5034 scope.go:117] "RemoveContainer" containerID="291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10" Jan 05 22:23:01 crc kubenswrapper[5034]: E0105 22:23:01.447599 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10\": container with ID starting with 291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10 not found: ID does not exist" containerID="291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.447622 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10"} err="failed to get container status \"291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10\": rpc error: code = NotFound desc = could not find container \"291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10\": container with ID starting with 291f278739795d94a624f085ac3fbbfcf5813d02febdb792d0ea7a6f3d3d6e10 not found: ID does not exist" Jan 05 22:23:01 crc kubenswrapper[5034]: I0105 22:23:01.855382 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" path="/var/lib/kubelet/pods/2b279e81-e2fb-43bc-bc13-2f7b750073f9/volumes" Jan 05 22:23:03 crc kubenswrapper[5034]: I0105 22:23:03.992698 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgfwx"] Jan 05 22:23:03 crc kubenswrapper[5034]: I0105 22:23:03.993642 5034 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-sgfwx" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerName="registry-server" containerID="cri-o://847c39407f442873450eb5f3b028fa54cf6abba1bcac94c6d9b0b945d62fc60f" gracePeriod=2 Jan 05 22:23:04 crc kubenswrapper[5034]: I0105 22:23:04.388335 5034 generic.go:334] "Generic (PLEG): container finished" podID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerID="847c39407f442873450eb5f3b028fa54cf6abba1bcac94c6d9b0b945d62fc60f" exitCode=0 Jan 05 22:23:04 crc kubenswrapper[5034]: I0105 22:23:04.388386 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgfwx" event={"ID":"3584e6ad-d8e6-4f94-917e-fb321d6938e9","Type":"ContainerDied","Data":"847c39407f442873450eb5f3b028fa54cf6abba1bcac94c6d9b0b945d62fc60f"} Jan 05 22:23:04 crc kubenswrapper[5034]: I0105 22:23:04.965839 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.056246 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qql7n\" (UniqueName: \"kubernetes.io/projected/3584e6ad-d8e6-4f94-917e-fb321d6938e9-kube-api-access-qql7n\") pod \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.056380 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-utilities\") pod \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.056475 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-catalog-content\") pod \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\" (UID: \"3584e6ad-d8e6-4f94-917e-fb321d6938e9\") " Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.057488 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-utilities" (OuterVolumeSpecName: "utilities") pod "3584e6ad-d8e6-4f94-917e-fb321d6938e9" (UID: "3584e6ad-d8e6-4f94-917e-fb321d6938e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.064235 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3584e6ad-d8e6-4f94-917e-fb321d6938e9-kube-api-access-qql7n" (OuterVolumeSpecName: "kube-api-access-qql7n") pod "3584e6ad-d8e6-4f94-917e-fb321d6938e9" (UID: "3584e6ad-d8e6-4f94-917e-fb321d6938e9"). InnerVolumeSpecName "kube-api-access-qql7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.081834 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3584e6ad-d8e6-4f94-917e-fb321d6938e9" (UID: "3584e6ad-d8e6-4f94-917e-fb321d6938e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.158902 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.158947 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3584e6ad-d8e6-4f94-917e-fb321d6938e9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.158963 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qql7n\" (UniqueName: \"kubernetes.io/projected/3584e6ad-d8e6-4f94-917e-fb321d6938e9-kube-api-access-qql7n\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.397476 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgfwx" event={"ID":"3584e6ad-d8e6-4f94-917e-fb321d6938e9","Type":"ContainerDied","Data":"d5178d68a8c8521022646e591fbaa899a6d961b654c07e0426e7b759c89f7ff2"} Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.397562 5034 scope.go:117] "RemoveContainer" containerID="847c39407f442873450eb5f3b028fa54cf6abba1bcac94c6d9b0b945d62fc60f" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.397706 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgfwx" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.432180 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgfwx"] Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.432901 5034 scope.go:117] "RemoveContainer" containerID="5d6740f4190f0e4353520e649da914f837cc469cc62907346dc2aa7dc00e8015" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.438291 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgfwx"] Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.451684 5034 scope.go:117] "RemoveContainer" containerID="8e5d7cbc01af02dab692a35857476b8be917f10281ef0d431b014ce71360e4b4" Jan 05 22:23:05 crc kubenswrapper[5034]: I0105 22:23:05.847266 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" path="/var/lib/kubelet/pods/3584e6ad-d8e6-4f94-917e-fb321d6938e9/volumes" Jan 05 22:23:06 crc kubenswrapper[5034]: I0105 22:23:06.983818 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brpjz"] Jan 05 22:23:06 crc kubenswrapper[5034]: I0105 22:23:06.984070 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-brpjz" podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerName="registry-server" containerID="cri-o://5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4" gracePeriod=2 Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.380859 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.421575 5034 generic.go:334] "Generic (PLEG): container finished" podID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerID="5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4" exitCode=0 Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.421630 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brpjz" event={"ID":"74183d4d-2bcf-4e23-823c-d1c9fa205f11","Type":"ContainerDied","Data":"5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4"} Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.421670 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brpjz" event={"ID":"74183d4d-2bcf-4e23-823c-d1c9fa205f11","Type":"ContainerDied","Data":"efb60bec3fd97e0d8c8b5a0e52a623a8bef23bae7e3ae79adc2dc64b0366e4c1"} Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.421694 5034 scope.go:117] "RemoveContainer" containerID="5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.421753 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brpjz" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.448208 5034 scope.go:117] "RemoveContainer" containerID="8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.468175 5034 scope.go:117] "RemoveContainer" containerID="79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.488555 5034 scope.go:117] "RemoveContainer" containerID="5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4" Jan 05 22:23:07 crc kubenswrapper[5034]: E0105 22:23:07.489230 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4\": container with ID starting with 5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4 not found: ID does not exist" containerID="5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.489269 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4"} err="failed to get container status \"5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4\": rpc error: code = NotFound desc = could not find container \"5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4\": container with ID starting with 5994e3b2d914a83884be1aff13006cb8cee5be5637c69caa0d0b79fb5042aed4 not found: ID does not exist" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.489297 5034 scope.go:117] "RemoveContainer" containerID="8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873" Jan 05 22:23:07 crc kubenswrapper[5034]: E0105 22:23:07.489605 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873\": container with ID starting with 8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873 not found: ID does not exist" 
containerID="8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.489634 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873"} err="failed to get container status \"8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873\": rpc error: code = NotFound desc = could not find container \"8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873\": container with ID starting with 8ce4304b8fad5ad1cd60b1259402a3c3b046b767fe5798c20a46ffd3d3830873 not found: ID does not exist" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.489654 5034 scope.go:117] "RemoveContainer" containerID="79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c" Jan 05 22:23:07 crc kubenswrapper[5034]: E0105 22:23:07.490713 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c\": container with ID starting with 79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c not found: ID does not exist" containerID="79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.490822 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c"} err="failed to get container status \"79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c\": rpc error: code = NotFound desc = could not find container \"79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c\": container with ID starting with 79f6b45cbda918f302c7be9ae9dfcfe47744eb653d9c0bd692b55898974e998c not found: ID does not exist" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.502190 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-utilities\") pod \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.502302 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-catalog-content\") pod \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.502346 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz2z4\" (UniqueName: \"kubernetes.io/projected/74183d4d-2bcf-4e23-823c-d1c9fa205f11-kube-api-access-tz2z4\") pod \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\" (UID: \"74183d4d-2bcf-4e23-823c-d1c9fa205f11\") " Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.503310 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-utilities" (OuterVolumeSpecName: "utilities") pod "74183d4d-2bcf-4e23-823c-d1c9fa205f11" (UID: "74183d4d-2bcf-4e23-823c-d1c9fa205f11"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.512422 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74183d4d-2bcf-4e23-823c-d1c9fa205f11-kube-api-access-tz2z4" (OuterVolumeSpecName: "kube-api-access-tz2z4") pod "74183d4d-2bcf-4e23-823c-d1c9fa205f11" (UID: "74183d4d-2bcf-4e23-823c-d1c9fa205f11"). InnerVolumeSpecName "kube-api-access-tz2z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.604152 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.604197 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz2z4\" (UniqueName: \"kubernetes.io/projected/74183d4d-2bcf-4e23-823c-d1c9fa205f11-kube-api-access-tz2z4\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.626822 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74183d4d-2bcf-4e23-823c-d1c9fa205f11" (UID: "74183d4d-2bcf-4e23-823c-d1c9fa205f11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.705753 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74183d4d-2bcf-4e23-823c-d1c9fa205f11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.755427 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brpjz"] Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.761497 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-brpjz"] Jan 05 22:23:07 crc kubenswrapper[5034]: I0105 22:23:07.850465 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" path="/var/lib/kubelet/pods/74183d4d-2bcf-4e23-823c-d1c9fa205f11/volumes" Jan 05 22:24:20 crc kubenswrapper[5034]: I0105 22:24:20.468988 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:24:20 crc kubenswrapper[5034]: I0105 22:24:20.470055 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:24:50 crc kubenswrapper[5034]: I0105 22:24:50.468925 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:24:50 crc kubenswrapper[5034]: I0105 22:24:50.469506 5034 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:25:20 crc kubenswrapper[5034]: I0105 22:25:20.470191 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:25:20 crc kubenswrapper[5034]: I0105 22:25:20.471248 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:25:20 crc kubenswrapper[5034]: I0105 22:25:20.471328 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:25:20 crc kubenswrapper[5034]: I0105 22:25:20.472416 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d80225dd0b406f15097d94022cacc358d9a138eb17e7966f964e37fdca8b2d73"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:25:20 crc kubenswrapper[5034]: I0105 22:25:20.472499 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://d80225dd0b406f15097d94022cacc358d9a138eb17e7966f964e37fdca8b2d73" gracePeriod=600 Jan 05 22:25:21 crc kubenswrapper[5034]: I0105 22:25:21.512943 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="d80225dd0b406f15097d94022cacc358d9a138eb17e7966f964e37fdca8b2d73" exitCode=0 Jan 05 22:25:21 crc kubenswrapper[5034]: I0105 22:25:21.513025 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"d80225dd0b406f15097d94022cacc358d9a138eb17e7966f964e37fdca8b2d73"} Jan 05 22:25:21 crc kubenswrapper[5034]: I0105 22:25:21.513685 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e"} Jan 05 22:25:21 crc kubenswrapper[5034]: I0105 22:25:21.513709 5034 scope.go:117] "RemoveContainer" containerID="8c61a2f611ad7f7e09c51c813ce06c0884c2b491b3ff7b64fd5a3dec02d9888f" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.362156 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jlcfq"] Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363468 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" 
containerName="extract-utilities" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363492 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerName="extract-utilities" Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363517 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerName="extract-content" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363526 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerName="extract-content" Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363543 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363551 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363561 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerName="extract-utilities" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363567 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerName="extract-utilities" Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363583 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerName="extract-content" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363589 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerName="extract-content" Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363605 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363612 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363622 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerName="extract-utilities" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363632 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerName="extract-utilities" Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363644 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363669 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: E0105 22:26:55.363686 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerName="extract-content" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363692 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerName="extract-content" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363842 5034 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="74183d4d-2bcf-4e23-823c-d1c9fa205f11" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363859 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b279e81-e2fb-43bc-bc13-2f7b750073f9" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.363867 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3584e6ad-d8e6-4f94-917e-fb321d6938e9" containerName="registry-server" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.365194 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.376566 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jlcfq"] Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.530695 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrn82\" (UniqueName: \"kubernetes.io/projected/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-kube-api-access-wrn82\") pod \"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.531059 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-catalog-content\") pod \"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.531348 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-utilities\") pod \"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.633607 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-utilities\") pod \"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.633732 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrn82\" (UniqueName: \"kubernetes.io/projected/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-kube-api-access-wrn82\") pod \"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.633825 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-catalog-content\") pod \"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.634987 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-catalog-content\") pod 
\"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.635015 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-utilities\") pod \"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.659774 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrn82\" (UniqueName: \"kubernetes.io/projected/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-kube-api-access-wrn82\") pod \"certified-operators-jlcfq\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:55 crc kubenswrapper[5034]: I0105 22:26:55.703511 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:26:56 crc kubenswrapper[5034]: I0105 22:26:56.237559 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jlcfq"] Jan 05 22:26:56 crc kubenswrapper[5034]: I0105 22:26:56.264975 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlcfq" event={"ID":"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24","Type":"ContainerStarted","Data":"b411a3485315efd7f9a932f07a9793b6cbabd209c971a956394dfa569dd6df53"} Jan 05 22:26:57 crc kubenswrapper[5034]: I0105 22:26:57.275479 5034 generic.go:334] "Generic (PLEG): container finished" podID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerID="4397fc779915e3c524b257d9bc562cda731ceafc76465a758e4a78878db2ea84" exitCode=0 Jan 05 22:26:57 crc kubenswrapper[5034]: I0105 22:26:57.275616 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlcfq" event={"ID":"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24","Type":"ContainerDied","Data":"4397fc779915e3c524b257d9bc562cda731ceafc76465a758e4a78878db2ea84"} Jan 05 22:26:58 crc kubenswrapper[5034]: I0105 22:26:58.286223 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlcfq" event={"ID":"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24","Type":"ContainerStarted","Data":"b7cbd2ed2e4f5d5f99dadcb804e865cd398f868d8b0837593209120bd9719551"} Jan 05 22:26:59 crc kubenswrapper[5034]: I0105 22:26:59.295385 5034 generic.go:334] "Generic (PLEG): container finished" podID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerID="b7cbd2ed2e4f5d5f99dadcb804e865cd398f868d8b0837593209120bd9719551" exitCode=0 Jan 05 22:26:59 crc kubenswrapper[5034]: I0105 22:26:59.295438 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlcfq" event={"ID":"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24","Type":"ContainerDied","Data":"b7cbd2ed2e4f5d5f99dadcb804e865cd398f868d8b0837593209120bd9719551"} Jan 05 22:27:00 crc kubenswrapper[5034]: I0105 22:27:00.305298 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlcfq" event={"ID":"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24","Type":"ContainerStarted","Data":"7d2d9ba0f4ef011005cf09d64600d1990c43adaf12c7abf24e7e038208f31a90"} Jan 05 22:27:00 crc kubenswrapper[5034]: I0105 22:27:00.328638 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-jlcfq" podStartSLOduration=2.716302778 podStartE2EDuration="5.328614633s" podCreationTimestamp="2026-01-05 22:26:55 +0000 UTC" firstStartedPulling="2026-01-05 22:26:57.278920557 +0000 UTC m=+2109.650919996" lastFinishedPulling="2026-01-05 22:26:59.891232412 +0000 UTC m=+2112.263231851" observedRunningTime="2026-01-05 22:27:00.323386971 +0000 UTC m=+2112.695386440" watchObservedRunningTime="2026-01-05 22:27:00.328614633 +0000 UTC m=+2112.700614072" Jan 05 22:27:05 crc kubenswrapper[5034]: I0105 22:27:05.703953 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:27:05 crc kubenswrapper[5034]: I0105 22:27:05.707409 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:27:05 crc kubenswrapper[5034]: I0105 22:27:05.788468 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:27:06 crc kubenswrapper[5034]: I0105 22:27:06.444309 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:27:06 crc kubenswrapper[5034]: I0105 22:27:06.519370 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jlcfq"] Jan 05 22:27:08 crc kubenswrapper[5034]: I0105 22:27:08.378802 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jlcfq" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerName="registry-server" containerID="cri-o://7d2d9ba0f4ef011005cf09d64600d1990c43adaf12c7abf24e7e038208f31a90" gracePeriod=2 Jan 05 22:27:09 crc kubenswrapper[5034]: I0105 22:27:09.390281 5034 generic.go:334] "Generic (PLEG): container finished" podID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerID="7d2d9ba0f4ef011005cf09d64600d1990c43adaf12c7abf24e7e038208f31a90" exitCode=0 Jan 05 22:27:09 crc kubenswrapper[5034]: I0105 22:27:09.390368 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlcfq" event={"ID":"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24","Type":"ContainerDied","Data":"7d2d9ba0f4ef011005cf09d64600d1990c43adaf12c7abf24e7e038208f31a90"} Jan 05 22:27:09 crc kubenswrapper[5034]: I0105 22:27:09.930851 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.081441 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-catalog-content\") pod \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.081500 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-utilities\") pod \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.081615 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrn82\" (UniqueName: \"kubernetes.io/projected/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-kube-api-access-wrn82\") pod \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\" (UID: \"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24\") " Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.082856 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-utilities" (OuterVolumeSpecName: "utilities") pod "61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" (UID: "61bb2c64-09c7-4f11-9e7b-c3e4bb329d24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.090583 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-kube-api-access-wrn82" (OuterVolumeSpecName: "kube-api-access-wrn82") pod "61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" (UID: "61bb2c64-09c7-4f11-9e7b-c3e4bb329d24"). InnerVolumeSpecName "kube-api-access-wrn82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.137734 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" (UID: "61bb2c64-09c7-4f11-9e7b-c3e4bb329d24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.183321 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrn82\" (UniqueName: \"kubernetes.io/projected/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-kube-api-access-wrn82\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.183360 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.183370 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.403853 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlcfq" event={"ID":"61bb2c64-09c7-4f11-9e7b-c3e4bb329d24","Type":"ContainerDied","Data":"b411a3485315efd7f9a932f07a9793b6cbabd209c971a956394dfa569dd6df53"} Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.403914 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jlcfq" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.403919 5034 scope.go:117] "RemoveContainer" containerID="7d2d9ba0f4ef011005cf09d64600d1990c43adaf12c7abf24e7e038208f31a90" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.425597 5034 scope.go:117] "RemoveContainer" containerID="b7cbd2ed2e4f5d5f99dadcb804e865cd398f868d8b0837593209120bd9719551" Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.456965 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jlcfq"] Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.464635 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jlcfq"] Jan 05 22:27:10 crc kubenswrapper[5034]: I0105 22:27:10.468517 5034 scope.go:117] "RemoveContainer" containerID="4397fc779915e3c524b257d9bc562cda731ceafc76465a758e4a78878db2ea84" Jan 05 22:27:11 crc kubenswrapper[5034]: I0105 22:27:11.852971 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" path="/var/lib/kubelet/pods/61bb2c64-09c7-4f11-9e7b-c3e4bb329d24/volumes" Jan 05 22:27:20 crc kubenswrapper[5034]: I0105 22:27:20.469202 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:27:20 crc kubenswrapper[5034]: I0105 22:27:20.469511 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:27:50 crc kubenswrapper[5034]: I0105 22:27:50.469578 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:27:50 crc kubenswrapper[5034]: I0105 22:27:50.470219 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 22:28:20.468977 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 22:28:20.469512 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 22:28:20.469558 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 22:28:20.470176 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 22:28:20.470229 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" gracePeriod=600 Jan 05 22:28:20 crc kubenswrapper[5034]: E0105 22:28:20.591703 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 22:28:20.942277 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" exitCode=0 Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 22:28:20.942323 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e"} Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 22:28:20.942357 5034 scope.go:117] "RemoveContainer" containerID="d80225dd0b406f15097d94022cacc358d9a138eb17e7966f964e37fdca8b2d73" Jan 05 22:28:20 crc kubenswrapper[5034]: I0105 
22:28:20.942718 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:28:20 crc kubenswrapper[5034]: E0105 22:28:20.942962 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:28:34 crc kubenswrapper[5034]: I0105 22:28:34.839014 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:28:34 crc kubenswrapper[5034]: E0105 22:28:34.839817 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:28:48 crc kubenswrapper[5034]: I0105 22:28:48.837799 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:28:48 crc kubenswrapper[5034]: E0105 22:28:48.838472 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:29:03 crc kubenswrapper[5034]: I0105 22:29:03.839436 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:29:03 crc kubenswrapper[5034]: E0105 22:29:03.841052 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:29:17 crc kubenswrapper[5034]: I0105 22:29:17.844821 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:29:17 crc kubenswrapper[5034]: E0105 22:29:17.845802 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:29:30 crc kubenswrapper[5034]: I0105 22:29:30.839153 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:29:30 crc kubenswrapper[5034]: E0105 22:29:30.840604 
5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:29:41 crc kubenswrapper[5034]: I0105 22:29:41.838056 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:29:41 crc kubenswrapper[5034]: E0105 22:29:41.838803 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:29:56 crc kubenswrapper[5034]: I0105 22:29:56.838788 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:29:56 crc kubenswrapper[5034]: E0105 22:29:56.839536 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.146280 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6"] Jan 05 22:30:00 crc kubenswrapper[5034]: E0105 22:30:00.146907 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerName="extract-content" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.146920 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerName="extract-content" Jan 05 22:30:00 crc kubenswrapper[5034]: E0105 22:30:00.146937 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerName="registry-server" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.146944 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerName="registry-server" Jan 05 22:30:00 crc kubenswrapper[5034]: E0105 22:30:00.146965 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerName="extract-utilities" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.146972 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerName="extract-utilities" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.147132 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="61bb2c64-09c7-4f11-9e7b-c3e4bb329d24" containerName="registry-server" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.147759 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.151905 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.151908 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.161484 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6"] Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.278169 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8t27\" (UniqueName: \"kubernetes.io/projected/915ee5d3-c02a-4ed3-a0b9-9e9490620077-kube-api-access-q8t27\") pod \"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.278229 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915ee5d3-c02a-4ed3-a0b9-9e9490620077-secret-volume\") pod \"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.278352 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915ee5d3-c02a-4ed3-a0b9-9e9490620077-config-volume\") pod \"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.379944 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8t27\" (UniqueName: \"kubernetes.io/projected/915ee5d3-c02a-4ed3-a0b9-9e9490620077-kube-api-access-q8t27\") pod \"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.380009 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915ee5d3-c02a-4ed3-a0b9-9e9490620077-secret-volume\") pod \"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.380145 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915ee5d3-c02a-4ed3-a0b9-9e9490620077-config-volume\") pod \"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.381057 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915ee5d3-c02a-4ed3-a0b9-9e9490620077-config-volume\") pod 
\"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.386525 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915ee5d3-c02a-4ed3-a0b9-9e9490620077-secret-volume\") pod \"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.398899 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8t27\" (UniqueName: \"kubernetes.io/projected/915ee5d3-c02a-4ed3-a0b9-9e9490620077-kube-api-access-q8t27\") pod \"collect-profiles-29460870-55ts6\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.465046 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:00 crc kubenswrapper[5034]: I0105 22:30:00.908292 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6"] Jan 05 22:30:00 crc kubenswrapper[5034]: W0105 22:30:00.917573 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod915ee5d3_c02a_4ed3_a0b9_9e9490620077.slice/crio-47eb1afcc6190c05b0e0c2793fa5e61de182ef2c7f689e344a348d625e40f163 WatchSource:0}: Error finding container 47eb1afcc6190c05b0e0c2793fa5e61de182ef2c7f689e344a348d625e40f163: Status 404 returned error can't find the container with id 47eb1afcc6190c05b0e0c2793fa5e61de182ef2c7f689e344a348d625e40f163 Jan 05 22:30:01 crc kubenswrapper[5034]: I0105 22:30:01.746058 5034 generic.go:334] "Generic (PLEG): container finished" podID="915ee5d3-c02a-4ed3-a0b9-9e9490620077" containerID="f3ec7983642ffebe89b45dc29276ed0a83f174dafdbda33664d9e0d2b5702f16" exitCode=0 Jan 05 22:30:01 crc kubenswrapper[5034]: I0105 22:30:01.746121 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" event={"ID":"915ee5d3-c02a-4ed3-a0b9-9e9490620077","Type":"ContainerDied","Data":"f3ec7983642ffebe89b45dc29276ed0a83f174dafdbda33664d9e0d2b5702f16"} Jan 05 22:30:01 crc kubenswrapper[5034]: I0105 22:30:01.746149 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" event={"ID":"915ee5d3-c02a-4ed3-a0b9-9e9490620077","Type":"ContainerStarted","Data":"47eb1afcc6190c05b0e0c2793fa5e61de182ef2c7f689e344a348d625e40f163"} Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.032253 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.222123 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8t27\" (UniqueName: \"kubernetes.io/projected/915ee5d3-c02a-4ed3-a0b9-9e9490620077-kube-api-access-q8t27\") pod \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.222284 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915ee5d3-c02a-4ed3-a0b9-9e9490620077-secret-volume\") pod \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.222333 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915ee5d3-c02a-4ed3-a0b9-9e9490620077-config-volume\") pod \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\" (UID: \"915ee5d3-c02a-4ed3-a0b9-9e9490620077\") " Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.223398 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/915ee5d3-c02a-4ed3-a0b9-9e9490620077-config-volume" (OuterVolumeSpecName: "config-volume") pod "915ee5d3-c02a-4ed3-a0b9-9e9490620077" (UID: "915ee5d3-c02a-4ed3-a0b9-9e9490620077"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.228594 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915ee5d3-c02a-4ed3-a0b9-9e9490620077-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "915ee5d3-c02a-4ed3-a0b9-9e9490620077" (UID: "915ee5d3-c02a-4ed3-a0b9-9e9490620077"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.228654 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915ee5d3-c02a-4ed3-a0b9-9e9490620077-kube-api-access-q8t27" (OuterVolumeSpecName: "kube-api-access-q8t27") pod "915ee5d3-c02a-4ed3-a0b9-9e9490620077" (UID: "915ee5d3-c02a-4ed3-a0b9-9e9490620077"). InnerVolumeSpecName "kube-api-access-q8t27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.323528 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8t27\" (UniqueName: \"kubernetes.io/projected/915ee5d3-c02a-4ed3-a0b9-9e9490620077-kube-api-access-q8t27\") on node \"crc\" DevicePath \"\"" Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.323930 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/915ee5d3-c02a-4ed3-a0b9-9e9490620077-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.323946 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/915ee5d3-c02a-4ed3-a0b9-9e9490620077-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.760798 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" event={"ID":"915ee5d3-c02a-4ed3-a0b9-9e9490620077","Type":"ContainerDied","Data":"47eb1afcc6190c05b0e0c2793fa5e61de182ef2c7f689e344a348d625e40f163"} Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.760841 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47eb1afcc6190c05b0e0c2793fa5e61de182ef2c7f689e344a348d625e40f163" Jan 05 22:30:03 crc kubenswrapper[5034]: I0105 22:30:03.760892 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6" Jan 05 22:30:04 crc kubenswrapper[5034]: I0105 22:30:04.116288 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh"] Jan 05 22:30:04 crc kubenswrapper[5034]: I0105 22:30:04.125343 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-ck6dh"] Jan 05 22:30:05 crc kubenswrapper[5034]: I0105 22:30:05.847414 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e89bb2-84f3-407b-966b-b1774d96da98" path="/var/lib/kubelet/pods/21e89bb2-84f3-407b-966b-b1774d96da98/volumes" Jan 05 22:30:11 crc kubenswrapper[5034]: I0105 22:30:11.838925 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:30:11 crc kubenswrapper[5034]: E0105 22:30:11.839784 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:30:22 crc kubenswrapper[5034]: I0105 22:30:22.839059 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:30:22 crc kubenswrapper[5034]: E0105 22:30:22.840496 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:30:37 crc kubenswrapper[5034]: I0105 22:30:37.844644 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:30:37 crc kubenswrapper[5034]: E0105 22:30:37.845426 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:30:49 crc kubenswrapper[5034]: I0105 22:30:49.839104 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:30:49 crc kubenswrapper[5034]: E0105 22:30:49.839896 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:30:57 crc kubenswrapper[5034]: I0105 22:30:57.105481 5034 scope.go:117] "RemoveContainer" containerID="0224fa77fe6c113b5804236d76f23f37ceb0ffee097db135f0146f20807a68ca" Jan 05 22:31:00 crc kubenswrapper[5034]: I0105 22:31:00.839797 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:31:00 crc kubenswrapper[5034]: E0105 22:31:00.841798 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:31:11 crc kubenswrapper[5034]: I0105 22:31:11.838439 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:31:11 crc kubenswrapper[5034]: E0105 22:31:11.839348 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:31:24 crc kubenswrapper[5034]: I0105 22:31:24.839806 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:31:24 crc kubenswrapper[5034]: E0105 22:31:24.841355 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:31:36 crc kubenswrapper[5034]: I0105 22:31:36.838653 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:31:36 crc kubenswrapper[5034]: E0105 22:31:36.839431 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:31:49 crc kubenswrapper[5034]: I0105 22:31:49.839295 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:31:49 crc kubenswrapper[5034]: E0105 22:31:49.840643 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:32:01 crc kubenswrapper[5034]: I0105 22:32:01.838895 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:32:01 crc kubenswrapper[5034]: E0105 22:32:01.839979 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:32:15 crc kubenswrapper[5034]: I0105 22:32:15.839396 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:32:15 crc kubenswrapper[5034]: E0105 22:32:15.840556 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:32:29 crc kubenswrapper[5034]: I0105 22:32:29.838868 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:32:29 crc kubenswrapper[5034]: E0105 22:32:29.839626 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:32:41 crc kubenswrapper[5034]: I0105 22:32:41.838228 5034 
scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:32:41 crc kubenswrapper[5034]: E0105 22:32:41.839594 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.303437 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nbhdx"] Jan 05 22:32:50 crc kubenswrapper[5034]: E0105 22:32:50.304208 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915ee5d3-c02a-4ed3-a0b9-9e9490620077" containerName="collect-profiles" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.304225 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="915ee5d3-c02a-4ed3-a0b9-9e9490620077" containerName="collect-profiles" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.304454 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="915ee5d3-c02a-4ed3-a0b9-9e9490620077" containerName="collect-profiles" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.306037 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.321402 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nbhdx"] Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.472842 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-utilities\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.473139 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psshq\" (UniqueName: \"kubernetes.io/projected/bc3aa247-3dcf-46d8-9dab-116f633e4f50-kube-api-access-psshq\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.473314 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-catalog-content\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.575125 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psshq\" (UniqueName: \"kubernetes.io/projected/bc3aa247-3dcf-46d8-9dab-116f633e4f50-kube-api-access-psshq\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.575268 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-catalog-content\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.575970 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-catalog-content\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.576123 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-utilities\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.576390 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-utilities\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.597071 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psshq\" (UniqueName: \"kubernetes.io/projected/bc3aa247-3dcf-46d8-9dab-116f633e4f50-kube-api-access-psshq\") pod \"redhat-operators-nbhdx\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:50 crc kubenswrapper[5034]: I0105 22:32:50.635735 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:32:51 crc kubenswrapper[5034]: I0105 22:32:51.101788 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nbhdx"] Jan 05 22:32:52 crc kubenswrapper[5034]: I0105 22:32:52.117329 5034 generic.go:334] "Generic (PLEG): container finished" podID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerID="3fabadd408cd289f6eee173bc7bad0b39ca205f8a9beb311bb0a2cac01656f77" exitCode=0 Jan 05 22:32:52 crc kubenswrapper[5034]: I0105 22:32:52.117380 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhdx" event={"ID":"bc3aa247-3dcf-46d8-9dab-116f633e4f50","Type":"ContainerDied","Data":"3fabadd408cd289f6eee173bc7bad0b39ca205f8a9beb311bb0a2cac01656f77"} Jan 05 22:32:52 crc kubenswrapper[5034]: I0105 22:32:52.117432 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhdx" event={"ID":"bc3aa247-3dcf-46d8-9dab-116f633e4f50","Type":"ContainerStarted","Data":"fb0a0e195fa56bd948b18e5d14a01c20bafa8353991225ecba4d65a756a59817"} Jan 05 22:32:52 crc kubenswrapper[5034]: I0105 22:32:52.119248 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:32:52 crc kubenswrapper[5034]: I0105 22:32:52.838867 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:32:52 crc kubenswrapper[5034]: E0105 22:32:52.839305 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:32:53 crc kubenswrapper[5034]: I0105 22:32:53.138227 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhdx" event={"ID":"bc3aa247-3dcf-46d8-9dab-116f633e4f50","Type":"ContainerStarted","Data":"977d801c567dc78f3ce5a6de8b3552517c4f60b6d3ec77c89d6f9bbaa620a94c"} Jan 05 22:32:54 crc kubenswrapper[5034]: I0105 22:32:54.146516 5034 generic.go:334] "Generic (PLEG): container finished" podID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerID="977d801c567dc78f3ce5a6de8b3552517c4f60b6d3ec77c89d6f9bbaa620a94c" exitCode=0 Jan 05 22:32:54 crc kubenswrapper[5034]: I0105 22:32:54.146566 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhdx" event={"ID":"bc3aa247-3dcf-46d8-9dab-116f633e4f50","Type":"ContainerDied","Data":"977d801c567dc78f3ce5a6de8b3552517c4f60b6d3ec77c89d6f9bbaa620a94c"} Jan 05 22:32:55 crc kubenswrapper[5034]: I0105 22:32:55.154932 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhdx" event={"ID":"bc3aa247-3dcf-46d8-9dab-116f633e4f50","Type":"ContainerStarted","Data":"13674562a8c2355cd003af8f7c0e0432438af7fb6e056caa2f77d08f2e4a6638"} Jan 05 22:32:55 crc kubenswrapper[5034]: I0105 22:32:55.174688 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nbhdx" podStartSLOduration=2.725548576 podStartE2EDuration="5.174660412s" podCreationTimestamp="2026-01-05 22:32:50 +0000 UTC" firstStartedPulling="2026-01-05 22:32:52.118986536 
+0000 UTC m=+2464.490985975" lastFinishedPulling="2026-01-05 22:32:54.568098372 +0000 UTC m=+2466.940097811" observedRunningTime="2026-01-05 22:32:55.171927894 +0000 UTC m=+2467.543927333" watchObservedRunningTime="2026-01-05 22:32:55.174660412 +0000 UTC m=+2467.546659861" Jan 05 22:33:00 crc kubenswrapper[5034]: I0105 22:33:00.636557 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:33:00 crc kubenswrapper[5034]: I0105 22:33:00.637772 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:33:00 crc kubenswrapper[5034]: I0105 22:33:00.699892 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:33:01 crc kubenswrapper[5034]: I0105 22:33:01.268962 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:33:01 crc kubenswrapper[5034]: I0105 22:33:01.318798 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nbhdx"] Jan 05 22:33:03 crc kubenswrapper[5034]: I0105 22:33:03.219212 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nbhdx" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerName="registry-server" containerID="cri-o://13674562a8c2355cd003af8f7c0e0432438af7fb6e056caa2f77d08f2e4a6638" gracePeriod=2 Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.242659 5034 generic.go:334] "Generic (PLEG): container finished" podID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerID="13674562a8c2355cd003af8f7c0e0432438af7fb6e056caa2f77d08f2e4a6638" exitCode=0 Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.243203 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhdx" event={"ID":"bc3aa247-3dcf-46d8-9dab-116f633e4f50","Type":"ContainerDied","Data":"13674562a8c2355cd003af8f7c0e0432438af7fb6e056caa2f77d08f2e4a6638"} Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.314318 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.517225 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-catalog-content\") pod \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.518500 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psshq\" (UniqueName: \"kubernetes.io/projected/bc3aa247-3dcf-46d8-9dab-116f633e4f50-kube-api-access-psshq\") pod \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.518631 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-utilities\") pod \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\" (UID: \"bc3aa247-3dcf-46d8-9dab-116f633e4f50\") " Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.519541 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-utilities" (OuterVolumeSpecName: "utilities") pod "bc3aa247-3dcf-46d8-9dab-116f633e4f50" (UID: "bc3aa247-3dcf-46d8-9dab-116f633e4f50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.528490 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3aa247-3dcf-46d8-9dab-116f633e4f50-kube-api-access-psshq" (OuterVolumeSpecName: "kube-api-access-psshq") pod "bc3aa247-3dcf-46d8-9dab-116f633e4f50" (UID: "bc3aa247-3dcf-46d8-9dab-116f633e4f50"). InnerVolumeSpecName "kube-api-access-psshq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.619927 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psshq\" (UniqueName: \"kubernetes.io/projected/bc3aa247-3dcf-46d8-9dab-116f633e4f50-kube-api-access-psshq\") on node \"crc\" DevicePath \"\"" Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.619964 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.675854 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc3aa247-3dcf-46d8-9dab-116f633e4f50" (UID: "bc3aa247-3dcf-46d8-9dab-116f633e4f50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.721121 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3aa247-3dcf-46d8-9dab-116f633e4f50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:33:06 crc kubenswrapper[5034]: I0105 22:33:06.838459 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:33:06 crc kubenswrapper[5034]: E0105 22:33:06.839442 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:33:07 crc kubenswrapper[5034]: I0105 22:33:07.253564 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhdx" event={"ID":"bc3aa247-3dcf-46d8-9dab-116f633e4f50","Type":"ContainerDied","Data":"fb0a0e195fa56bd948b18e5d14a01c20bafa8353991225ecba4d65a756a59817"} Jan 05 22:33:07 crc kubenswrapper[5034]: I0105 22:33:07.253622 5034 scope.go:117] "RemoveContainer" containerID="13674562a8c2355cd003af8f7c0e0432438af7fb6e056caa2f77d08f2e4a6638" Jan 05 22:33:07 crc kubenswrapper[5034]: I0105 22:33:07.253797 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nbhdx" Jan 05 22:33:07 crc kubenswrapper[5034]: I0105 22:33:07.290794 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nbhdx"] Jan 05 22:33:07 crc kubenswrapper[5034]: I0105 22:33:07.296448 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nbhdx"] Jan 05 22:33:07 crc kubenswrapper[5034]: I0105 22:33:07.301425 5034 scope.go:117] "RemoveContainer" containerID="977d801c567dc78f3ce5a6de8b3552517c4f60b6d3ec77c89d6f9bbaa620a94c" Jan 05 22:33:07 crc kubenswrapper[5034]: I0105 22:33:07.323539 5034 scope.go:117] "RemoveContainer" containerID="3fabadd408cd289f6eee173bc7bad0b39ca205f8a9beb311bb0a2cac01656f77" Jan 05 22:33:07 crc kubenswrapper[5034]: I0105 22:33:07.857980 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" path="/var/lib/kubelet/pods/bc3aa247-3dcf-46d8-9dab-116f633e4f50/volumes" Jan 05 22:33:21 crc kubenswrapper[5034]: I0105 22:33:21.838890 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e" Jan 05 22:33:22 crc kubenswrapper[5034]: I0105 22:33:22.420165 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"bc5cb06c3c6a2023859862b1c61dde7c65fcef7432c175f100357f4726da8fb5"} Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.299288 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cls76"] Jan 05 22:33:30 crc kubenswrapper[5034]: E0105 22:33:30.300872 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerName="registry-server" Jan 05 22:33:30 crc 
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.300902 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerName="registry-server"
Jan 05 22:33:30 crc kubenswrapper[5034]: E0105 22:33:30.300939 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerName="extract-utilities"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.300953 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerName="extract-utilities"
Jan 05 22:33:30 crc kubenswrapper[5034]: E0105 22:33:30.300982 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerName="extract-content"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.300996 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerName="extract-content"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.301351 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3aa247-3dcf-46d8-9dab-116f633e4f50" containerName="registry-server"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.303563 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cls76"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.311928 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cls76"]
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.480553 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-utilities\") pod \"redhat-marketplace-cls76\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " pod="openshift-marketplace/redhat-marketplace-cls76"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.480597 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-catalog-content\") pod \"redhat-marketplace-cls76\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " pod="openshift-marketplace/redhat-marketplace-cls76"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.480695 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dkxp\" (UniqueName: \"kubernetes.io/projected/0e1184a0-4654-421e-813a-d96ca215a38a-kube-api-access-5dkxp\") pod \"redhat-marketplace-cls76\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " pod="openshift-marketplace/redhat-marketplace-cls76"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.582505 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-utilities\") pod \"redhat-marketplace-cls76\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " pod="openshift-marketplace/redhat-marketplace-cls76"
Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.582596 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.582663 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dkxp\" (UniqueName: \"kubernetes.io/projected/0e1184a0-4654-421e-813a-d96ca215a38a-kube-api-access-5dkxp\") pod \"redhat-marketplace-cls76\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.583940 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-utilities\") pod \"redhat-marketplace-cls76\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.584172 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-catalog-content\") pod \"redhat-marketplace-cls76\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.613283 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dkxp\" (UniqueName: \"kubernetes.io/projected/0e1184a0-4654-421e-813a-d96ca215a38a-kube-api-access-5dkxp\") pod \"redhat-marketplace-cls76\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:30 crc kubenswrapper[5034]: I0105 22:33:30.639986 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:31 crc kubenswrapper[5034]: I0105 22:33:31.145122 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cls76"] Jan 05 22:33:31 crc kubenswrapper[5034]: I0105 22:33:31.506943 5034 generic.go:334] "Generic (PLEG): container finished" podID="0e1184a0-4654-421e-813a-d96ca215a38a" containerID="08f758d2f7d3242a9cab39166cd7f687b37bff678d27d2611d85aa01a500d962" exitCode=0 Jan 05 22:33:31 crc kubenswrapper[5034]: I0105 22:33:31.507033 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cls76" event={"ID":"0e1184a0-4654-421e-813a-d96ca215a38a","Type":"ContainerDied","Data":"08f758d2f7d3242a9cab39166cd7f687b37bff678d27d2611d85aa01a500d962"} Jan 05 22:33:31 crc kubenswrapper[5034]: I0105 22:33:31.508278 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cls76" event={"ID":"0e1184a0-4654-421e-813a-d96ca215a38a","Type":"ContainerStarted","Data":"d45a094ec996765249872fee9b3557c2027d807f7bb518f7c4d10a0668148456"} Jan 05 22:33:33 crc kubenswrapper[5034]: I0105 22:33:33.531251 5034 generic.go:334] "Generic (PLEG): container finished" podID="0e1184a0-4654-421e-813a-d96ca215a38a" containerID="3091ed3da41109a842c8fc8cfa2266a5a4e9a3d07db73e01a375410fe14614e6" exitCode=0 Jan 05 22:33:33 crc kubenswrapper[5034]: I0105 22:33:33.531326 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cls76" event={"ID":"0e1184a0-4654-421e-813a-d96ca215a38a","Type":"ContainerDied","Data":"3091ed3da41109a842c8fc8cfa2266a5a4e9a3d07db73e01a375410fe14614e6"} Jan 05 22:33:34 crc kubenswrapper[5034]: I0105 22:33:34.543417 5034 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cls76" event={"ID":"0e1184a0-4654-421e-813a-d96ca215a38a","Type":"ContainerStarted","Data":"adb1f3f0351d27f1c1710d70e43939a45af281df3b046a6628ecbd6e26cc89e1"} Jan 05 22:33:34 crc kubenswrapper[5034]: I0105 22:33:34.570873 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cls76" podStartSLOduration=1.9796890679999999 podStartE2EDuration="4.57084577s" podCreationTimestamp="2026-01-05 22:33:30 +0000 UTC" firstStartedPulling="2026-01-05 22:33:31.51013877 +0000 UTC m=+2503.882138209" lastFinishedPulling="2026-01-05 22:33:34.101295472 +0000 UTC m=+2506.473294911" observedRunningTime="2026-01-05 22:33:34.566929648 +0000 UTC m=+2506.938929087" watchObservedRunningTime="2026-01-05 22:33:34.57084577 +0000 UTC m=+2506.942845209" Jan 05 22:33:40 crc kubenswrapper[5034]: I0105 22:33:40.640658 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:40 crc kubenswrapper[5034]: I0105 22:33:40.641345 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:40 crc kubenswrapper[5034]: I0105 22:33:40.698927 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:41 crc kubenswrapper[5034]: I0105 22:33:41.652169 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:41 crc kubenswrapper[5034]: I0105 22:33:41.710682 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cls76"] Jan 05 22:33:43 crc kubenswrapper[5034]: I0105 22:33:43.630560 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cls76" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" containerName="registry-server" containerID="cri-o://adb1f3f0351d27f1c1710d70e43939a45af281df3b046a6628ecbd6e26cc89e1" gracePeriod=2 Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.647025 5034 generic.go:334] "Generic (PLEG): container finished" podID="0e1184a0-4654-421e-813a-d96ca215a38a" containerID="adb1f3f0351d27f1c1710d70e43939a45af281df3b046a6628ecbd6e26cc89e1" exitCode=0 Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.647126 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cls76" event={"ID":"0e1184a0-4654-421e-813a-d96ca215a38a","Type":"ContainerDied","Data":"adb1f3f0351d27f1c1710d70e43939a45af281df3b046a6628ecbd6e26cc89e1"} Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.647510 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cls76" event={"ID":"0e1184a0-4654-421e-813a-d96ca215a38a","Type":"ContainerDied","Data":"d45a094ec996765249872fee9b3557c2027d807f7bb518f7c4d10a0668148456"} Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.647533 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d45a094ec996765249872fee9b3557c2027d807f7bb518f7c4d10a0668148456" Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.682165 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.727516 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-catalog-content\") pod \"0e1184a0-4654-421e-813a-d96ca215a38a\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.727584 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-utilities\") pod \"0e1184a0-4654-421e-813a-d96ca215a38a\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.727647 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dkxp\" (UniqueName: \"kubernetes.io/projected/0e1184a0-4654-421e-813a-d96ca215a38a-kube-api-access-5dkxp\") pod \"0e1184a0-4654-421e-813a-d96ca215a38a\" (UID: \"0e1184a0-4654-421e-813a-d96ca215a38a\") " Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.728732 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-utilities" (OuterVolumeSpecName: "utilities") pod "0e1184a0-4654-421e-813a-d96ca215a38a" (UID: "0e1184a0-4654-421e-813a-d96ca215a38a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.735976 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1184a0-4654-421e-813a-d96ca215a38a-kube-api-access-5dkxp" (OuterVolumeSpecName: "kube-api-access-5dkxp") pod "0e1184a0-4654-421e-813a-d96ca215a38a" (UID: "0e1184a0-4654-421e-813a-d96ca215a38a"). InnerVolumeSpecName "kube-api-access-5dkxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.765878 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e1184a0-4654-421e-813a-d96ca215a38a" (UID: "0e1184a0-4654-421e-813a-d96ca215a38a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.829835 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dkxp\" (UniqueName: \"kubernetes.io/projected/0e1184a0-4654-421e-813a-d96ca215a38a-kube-api-access-5dkxp\") on node \"crc\" DevicePath \"\"" Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.829868 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:33:44 crc kubenswrapper[5034]: I0105 22:33:44.829877 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1184a0-4654-421e-813a-d96ca215a38a-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:33:45 crc kubenswrapper[5034]: I0105 22:33:45.653219 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cls76" Jan 05 22:33:45 crc kubenswrapper[5034]: I0105 22:33:45.681320 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cls76"] Jan 05 22:33:45 crc kubenswrapper[5034]: I0105 22:33:45.687612 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cls76"] Jan 05 22:33:45 crc kubenswrapper[5034]: I0105 22:33:45.847208 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" path="/var/lib/kubelet/pods/0e1184a0-4654-421e-813a-d96ca215a38a/volumes" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.227689 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5dv6c"] Jan 05 22:34:02 crc kubenswrapper[5034]: E0105 22:34:02.228643 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" containerName="registry-server" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.228659 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" containerName="registry-server" Jan 05 22:34:02 crc kubenswrapper[5034]: E0105 22:34:02.228677 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" containerName="extract-utilities" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.228685 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" containerName="extract-utilities" Jan 05 22:34:02 crc kubenswrapper[5034]: E0105 22:34:02.228710 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" containerName="extract-content" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.228719 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" containerName="extract-content" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.228902 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1184a0-4654-421e-813a-d96ca215a38a" containerName="registry-server" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.230266 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.250569 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5dv6c"] Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.283624 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgkkk\" (UniqueName: \"kubernetes.io/projected/efa691c8-5d0e-4acb-9aad-2c766297edaf-kube-api-access-xgkkk\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.283810 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-catalog-content\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.283931 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-utilities\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.385544 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-catalog-content\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.385616 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-utilities\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.385677 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgkkk\" (UniqueName: \"kubernetes.io/projected/efa691c8-5d0e-4acb-9aad-2c766297edaf-kube-api-access-xgkkk\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.386501 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-catalog-content\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.387422 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-utilities\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.406416 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xgkkk\" (UniqueName: \"kubernetes.io/projected/efa691c8-5d0e-4acb-9aad-2c766297edaf-kube-api-access-xgkkk\") pod \"community-operators-5dv6c\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:02 crc kubenswrapper[5034]: I0105 22:34:02.556096 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:03 crc kubenswrapper[5034]: I0105 22:34:03.191732 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5dv6c"] Jan 05 22:34:03 crc kubenswrapper[5034]: I0105 22:34:03.840988 5034 generic.go:334] "Generic (PLEG): container finished" podID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerID="f7740acea02205e66f64af506f304cbf06186be90cebb42f4877b8b58eacf4e2" exitCode=0 Jan 05 22:34:03 crc kubenswrapper[5034]: I0105 22:34:03.845614 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dv6c" event={"ID":"efa691c8-5d0e-4acb-9aad-2c766297edaf","Type":"ContainerDied","Data":"f7740acea02205e66f64af506f304cbf06186be90cebb42f4877b8b58eacf4e2"} Jan 05 22:34:03 crc kubenswrapper[5034]: I0105 22:34:03.845647 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dv6c" event={"ID":"efa691c8-5d0e-4acb-9aad-2c766297edaf","Type":"ContainerStarted","Data":"fc5004a47b931b8965fc35de964a2811b77d7d161f930395f17e2c835ac5d593"} Jan 05 22:34:04 crc kubenswrapper[5034]: I0105 22:34:04.850013 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dv6c" event={"ID":"efa691c8-5d0e-4acb-9aad-2c766297edaf","Type":"ContainerStarted","Data":"3ee3f9fda37e4b8e1700f89b26ac2c5ba651104e70652838022f58a25b3d3e2c"} Jan 05 22:34:05 crc kubenswrapper[5034]: I0105 22:34:05.867229 5034 generic.go:334] "Generic (PLEG): container finished" podID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerID="3ee3f9fda37e4b8e1700f89b26ac2c5ba651104e70652838022f58a25b3d3e2c" exitCode=0 Jan 05 22:34:05 crc kubenswrapper[5034]: I0105 22:34:05.867880 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dv6c" event={"ID":"efa691c8-5d0e-4acb-9aad-2c766297edaf","Type":"ContainerDied","Data":"3ee3f9fda37e4b8e1700f89b26ac2c5ba651104e70652838022f58a25b3d3e2c"} Jan 05 22:34:06 crc kubenswrapper[5034]: I0105 22:34:06.882936 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dv6c" event={"ID":"efa691c8-5d0e-4acb-9aad-2c766297edaf","Type":"ContainerStarted","Data":"9da48b4c893c19493d42e8652f45207d49051942d8b6dfa121f489b85baccb08"} Jan 05 22:34:06 crc kubenswrapper[5034]: I0105 22:34:06.917640 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5dv6c" podStartSLOduration=2.44439579 podStartE2EDuration="4.917615969s" podCreationTimestamp="2026-01-05 22:34:02 +0000 UTC" firstStartedPulling="2026-01-05 22:34:03.843456439 +0000 UTC m=+2536.215455878" lastFinishedPulling="2026-01-05 22:34:06.316676598 +0000 UTC m=+2538.688676057" observedRunningTime="2026-01-05 22:34:06.903458789 +0000 UTC m=+2539.275458238" watchObservedRunningTime="2026-01-05 22:34:06.917615969 +0000 UTC m=+2539.289615408" Jan 05 22:34:12 crc kubenswrapper[5034]: I0105 22:34:12.556706 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:12 crc kubenswrapper[5034]: I0105 22:34:12.557856 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:12 crc kubenswrapper[5034]: I0105 22:34:12.625346 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:12 crc kubenswrapper[5034]: I0105 22:34:12.976018 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:13 crc kubenswrapper[5034]: I0105 22:34:13.032043 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5dv6c"] Jan 05 22:34:14 crc kubenswrapper[5034]: I0105 22:34:14.946714 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5dv6c" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerName="registry-server" containerID="cri-o://9da48b4c893c19493d42e8652f45207d49051942d8b6dfa121f489b85baccb08" gracePeriod=2 Jan 05 22:34:15 crc kubenswrapper[5034]: I0105 22:34:15.959667 5034 generic.go:334] "Generic (PLEG): container finished" podID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerID="9da48b4c893c19493d42e8652f45207d49051942d8b6dfa121f489b85baccb08" exitCode=0 Jan 05 22:34:15 crc kubenswrapper[5034]: I0105 22:34:15.959777 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dv6c" event={"ID":"efa691c8-5d0e-4acb-9aad-2c766297edaf","Type":"ContainerDied","Data":"9da48b4c893c19493d42e8652f45207d49051942d8b6dfa121f489b85baccb08"} Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.510561 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5dv6c" Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.569714 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-catalog-content\") pod \"efa691c8-5d0e-4acb-9aad-2c766297edaf\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.570034 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgkkk\" (UniqueName: \"kubernetes.io/projected/efa691c8-5d0e-4acb-9aad-2c766297edaf-kube-api-access-xgkkk\") pod \"efa691c8-5d0e-4acb-9aad-2c766297edaf\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.570069 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-utilities\") pod \"efa691c8-5d0e-4acb-9aad-2c766297edaf\" (UID: \"efa691c8-5d0e-4acb-9aad-2c766297edaf\") " Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.571104 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-utilities" (OuterVolumeSpecName: "utilities") pod "efa691c8-5d0e-4acb-9aad-2c766297edaf" (UID: "efa691c8-5d0e-4acb-9aad-2c766297edaf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.578830 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa691c8-5d0e-4acb-9aad-2c766297edaf-kube-api-access-xgkkk" (OuterVolumeSpecName: "kube-api-access-xgkkk") pod "efa691c8-5d0e-4acb-9aad-2c766297edaf" (UID: "efa691c8-5d0e-4acb-9aad-2c766297edaf"). InnerVolumeSpecName "kube-api-access-xgkkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.622562 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efa691c8-5d0e-4acb-9aad-2c766297edaf" (UID: "efa691c8-5d0e-4acb-9aad-2c766297edaf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.673243 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgkkk\" (UniqueName: \"kubernetes.io/projected/efa691c8-5d0e-4acb-9aad-2c766297edaf-kube-api-access-xgkkk\") on node \"crc\" DevicePath \"\"" Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.673305 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.673329 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa691c8-5d0e-4acb-9aad-2c766297edaf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.969151 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dv6c" event={"ID":"efa691c8-5d0e-4acb-9aad-2c766297edaf","Type":"ContainerDied","Data":"fc5004a47b931b8965fc35de964a2811b77d7d161f930395f17e2c835ac5d593"} Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.969203 5034 scope.go:117] "RemoveContainer" containerID="9da48b4c893c19493d42e8652f45207d49051942d8b6dfa121f489b85baccb08" Jan 05 22:34:16 crc kubenswrapper[5034]: I0105 22:34:16.969329 5034 util.go:48] "No ready sandbox for pod can be found. 
Jan 05 22:34:17 crc kubenswrapper[5034]: I0105 22:34:17.009632 5034 scope.go:117] "RemoveContainer" containerID="3ee3f9fda37e4b8e1700f89b26ac2c5ba651104e70652838022f58a25b3d3e2c"
Jan 05 22:34:17 crc kubenswrapper[5034]: I0105 22:34:17.016303 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5dv6c"]
Jan 05 22:34:17 crc kubenswrapper[5034]: I0105 22:34:17.022700 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5dv6c"]
Jan 05 22:34:17 crc kubenswrapper[5034]: I0105 22:34:17.040207 5034 scope.go:117] "RemoveContainer" containerID="f7740acea02205e66f64af506f304cbf06186be90cebb42f4877b8b58eacf4e2"
Jan 05 22:34:17 crc kubenswrapper[5034]: I0105 22:34:17.846694 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" path="/var/lib/kubelet/pods/efa691c8-5d0e-4acb-9aad-2c766297edaf/volumes"
Jan 05 22:35:50 crc kubenswrapper[5034]: I0105 22:35:50.468784 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:35:50 crc kubenswrapper[5034]: I0105 22:35:50.469517 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:36:20 crc kubenswrapper[5034]: I0105 22:36:20.470262 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:36:20 crc kubenswrapper[5034]: I0105 22:36:20.470832 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:36:50 crc kubenswrapper[5034]: I0105 22:36:50.469386 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:36:50 crc kubenswrapper[5034]: I0105 22:36:50.470380 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:36:50 crc kubenswrapper[5034]: I0105 22:36:50.470465 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc"
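[Three liveness failures thirty seconds apart (22:35:50, 22:36:20, 22:36:50) precede the restart decision logged just below, all against HTTP GET 127.0.0.1:8798/health. A hedged reconstruction of the probe in corev1 terms: the endpoint and port are from the log, periodSeconds is read off the 30s spacing, and failureThreshold=3 is inferred from the three failures, not read from any spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Probe implied by the prober.go entries above. Values marked in the lead-in
// as inferred are assumptions, not confirmed by this log alone.
var mcdLiveness = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Host: "127.0.0.1",
			Path: "/health",
			Port: intstr.FromInt(8798),
		},
	},
	PeriodSeconds:    30,
	FailureThreshold: 3,
}

func main() { fmt.Printf("%+v\n", mcdLiveness) }
]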
Jan 05 22:36:50 crc kubenswrapper[5034]: I0105 22:36:50.471520 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc5cb06c3c6a2023859862b1c61dde7c65fcef7432c175f100357f4726da8fb5"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 05 22:36:50 crc kubenswrapper[5034]: I0105 22:36:50.471592 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://bc5cb06c3c6a2023859862b1c61dde7c65fcef7432c175f100357f4726da8fb5" gracePeriod=600
Jan 05 22:36:51 crc kubenswrapper[5034]: I0105 22:36:51.082875 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="bc5cb06c3c6a2023859862b1c61dde7c65fcef7432c175f100357f4726da8fb5" exitCode=0
Jan 05 22:36:51 crc kubenswrapper[5034]: I0105 22:36:51.082954 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"bc5cb06c3c6a2023859862b1c61dde7c65fcef7432c175f100357f4726da8fb5"}
Jan 05 22:36:51 crc kubenswrapper[5034]: I0105 22:36:51.083817 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c"}
Jan 05 22:36:51 crc kubenswrapper[5034]: I0105 22:36:51.083841 5034 scope.go:117] "RemoveContainer" containerID="72550c32db5971a7fe91cafd213029f99b57838ed71a003ac44854e0c53a1d2e"
Jan 05 22:38:50 crc kubenswrapper[5034]: I0105 22:38:50.469133 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:38:50 crc kubenswrapper[5034]: I0105 22:38:50.469745 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.276877 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4k7q9"]
Jan 05 22:39:18 crc kubenswrapper[5034]: E0105 22:39:18.279026 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerName="extract-utilities"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.279173 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerName="extract-utilities"
Jan 05 22:39:18 crc kubenswrapper[5034]: E0105 22:39:18.279319 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerName="registry-server"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.279404 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerName="registry-server"
Jan 05 22:39:18 crc kubenswrapper[5034]: E0105 22:39:18.279486 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerName="extract-content"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.279569 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerName="extract-content"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.279837 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa691c8-5d0e-4acb-9aad-2c766297edaf" containerName="registry-server"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.281209 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.295192 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k7q9"]
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.383249 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6nql\" (UniqueName: \"kubernetes.io/projected/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-kube-api-access-l6nql\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.383334 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-catalog-content\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.383405 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-utilities\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.485223 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6nql\" (UniqueName: \"kubernetes.io/projected/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-kube-api-access-l6nql\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.485311 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-catalog-content\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.485338 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-utilities\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.485885 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-catalog-content\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.485932 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-utilities\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.508675 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6nql\" (UniqueName: \"kubernetes.io/projected/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-kube-api-access-l6nql\") pod \"certified-operators-4k7q9\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:18 crc kubenswrapper[5034]: I0105 22:39:18.601922 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k7q9"
Jan 05 22:39:19 crc kubenswrapper[5034]: I0105 22:39:19.169844 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k7q9"]
Jan 05 22:39:20 crc kubenswrapper[5034]: I0105 22:39:20.131166 5034 generic.go:334] "Generic (PLEG): container finished" podID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerID="9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a" exitCode=0
Jan 05 22:39:20 crc kubenswrapper[5034]: I0105 22:39:20.131255 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k7q9" event={"ID":"3c522fb5-f4b8-4199-9a73-3e371b45ea2a","Type":"ContainerDied","Data":"9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a"}
Jan 05 22:39:20 crc kubenswrapper[5034]: I0105 22:39:20.131327 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k7q9" event={"ID":"3c522fb5-f4b8-4199-9a73-3e371b45ea2a","Type":"ContainerStarted","Data":"f6b675a0da0669d253ce202e60a95ceb58a23383ad73f572260f94b5d736422d"}
Jan 05 22:39:20 crc kubenswrapper[5034]: I0105 22:39:20.135145 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 05 22:39:20 crc kubenswrapper[5034]: I0105 22:39:20.469306 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:39:20 crc kubenswrapper[5034]: I0105 22:39:20.470015 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:39:21 crc kubenswrapper[5034]: I0105 22:39:21.142165 5034 generic.go:334] "Generic (PLEG): container finished" podID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerID="7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243" exitCode=0
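[The PLEG pattern above, two short-lived containers each finishing with exitCode=0 before the long-running one starts, is the init-container chain completing; presumably 9737b5... and 7f997f... are the extract-utilities and extract-content steps, in that order, though the log does not bind IDs to names. The same exit codes can be read back from pod status. A small sketch, reusing a clientset built as in the earlier deletion example:

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printInitExitCodes reads back the exit codes that PLEG reported above.
// cs is a clientset constructed as in the earlier deletion sketch.
func printInitExitCodes(ctx context.Context, cs kubernetes.Interface) error {
	pod, err := cs.CoreV1().Pods("openshift-marketplace").Get(ctx, "certified-operators-4k7q9", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, st := range pod.Status.InitContainerStatuses {
		if t := st.State.Terminated; t != nil {
			fmt.Printf("%s exited with code %d\n", st.Name, t.ExitCode) // 0 for both extract steps here
		}
	}
	return nil
}
]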
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k7q9" event={"ID":"3c522fb5-f4b8-4199-9a73-3e371b45ea2a","Type":"ContainerDied","Data":"7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243"} Jan 05 22:39:22 crc kubenswrapper[5034]: I0105 22:39:22.162178 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k7q9" event={"ID":"3c522fb5-f4b8-4199-9a73-3e371b45ea2a","Type":"ContainerStarted","Data":"0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da"} Jan 05 22:39:22 crc kubenswrapper[5034]: I0105 22:39:22.189332 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4k7q9" podStartSLOduration=2.703966479 podStartE2EDuration="4.189316527s" podCreationTimestamp="2026-01-05 22:39:18 +0000 UTC" firstStartedPulling="2026-01-05 22:39:20.134842455 +0000 UTC m=+2852.506841894" lastFinishedPulling="2026-01-05 22:39:21.620192493 +0000 UTC m=+2853.992191942" observedRunningTime="2026-01-05 22:39:22.186468247 +0000 UTC m=+2854.558467686" watchObservedRunningTime="2026-01-05 22:39:22.189316527 +0000 UTC m=+2854.561315966" Jan 05 22:39:28 crc kubenswrapper[5034]: I0105 22:39:28.602958 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4k7q9" Jan 05 22:39:28 crc kubenswrapper[5034]: I0105 22:39:28.603579 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4k7q9" Jan 05 22:39:28 crc kubenswrapper[5034]: I0105 22:39:28.648000 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4k7q9" Jan 05 22:39:29 crc kubenswrapper[5034]: I0105 22:39:29.258416 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4k7q9" Jan 05 22:39:29 crc kubenswrapper[5034]: I0105 22:39:29.308874 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k7q9"] Jan 05 22:39:31 crc kubenswrapper[5034]: I0105 22:39:31.241097 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4k7q9" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerName="registry-server" containerID="cri-o://0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da" gracePeriod=2 Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.150951 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4k7q9" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.248849 5034 generic.go:334] "Generic (PLEG): container finished" podID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerID="0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da" exitCode=0 Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.248897 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k7q9" event={"ID":"3c522fb5-f4b8-4199-9a73-3e371b45ea2a","Type":"ContainerDied","Data":"0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da"} Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.248932 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k7q9" event={"ID":"3c522fb5-f4b8-4199-9a73-3e371b45ea2a","Type":"ContainerDied","Data":"f6b675a0da0669d253ce202e60a95ceb58a23383ad73f572260f94b5d736422d"} Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.248941 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k7q9" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.248950 5034 scope.go:117] "RemoveContainer" containerID="0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.267778 5034 scope.go:117] "RemoveContainer" containerID="7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.292428 5034 scope.go:117] "RemoveContainer" containerID="9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.309357 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-catalog-content\") pod \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.309502 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-utilities\") pod \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.309534 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6nql\" (UniqueName: \"kubernetes.io/projected/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-kube-api-access-l6nql\") pod \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\" (UID: \"3c522fb5-f4b8-4199-9a73-3e371b45ea2a\") " Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.311127 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-utilities" (OuterVolumeSpecName: "utilities") pod "3c522fb5-f4b8-4199-9a73-3e371b45ea2a" (UID: "3c522fb5-f4b8-4199-9a73-3e371b45ea2a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.315055 5034 scope.go:117] "RemoveContainer" containerID="0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da" Jan 05 22:39:32 crc kubenswrapper[5034]: E0105 22:39:32.315879 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da\": container with ID starting with 0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da not found: ID does not exist" containerID="0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.315944 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da"} err="failed to get container status \"0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da\": rpc error: code = NotFound desc = could not find container \"0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da\": container with ID starting with 0977a5161ddfcc478912ae8bcebb0d354970dbdf50a476232dcbcc606e7972da not found: ID does not exist" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.315978 5034 scope.go:117] "RemoveContainer" containerID="7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.316252 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-kube-api-access-l6nql" (OuterVolumeSpecName: "kube-api-access-l6nql") pod "3c522fb5-f4b8-4199-9a73-3e371b45ea2a" (UID: "3c522fb5-f4b8-4199-9a73-3e371b45ea2a"). InnerVolumeSpecName "kube-api-access-l6nql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:39:32 crc kubenswrapper[5034]: E0105 22:39:32.316339 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243\": container with ID starting with 7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243 not found: ID does not exist" containerID="7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.316370 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243"} err="failed to get container status \"7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243\": rpc error: code = NotFound desc = could not find container \"7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243\": container with ID starting with 7f997f7da224820c94870a2088ba67757977a33e2d62e9436d99ac52088de243 not found: ID does not exist" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.316398 5034 scope.go:117] "RemoveContainer" containerID="9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a" Jan 05 22:39:32 crc kubenswrapper[5034]: E0105 22:39:32.316791 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a\": container with ID starting with 9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a not found: ID does not exist" containerID="9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.316819 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a"} err="failed to get container status \"9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a\": rpc error: code = NotFound desc = could not find container \"9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a\": container with ID starting with 9737b56c48448dceca5fbeceddb367e547fbc845efe89fb82998be59e439804a not found: ID does not exist" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.361485 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c522fb5-f4b8-4199-9a73-3e371b45ea2a" (UID: "3c522fb5-f4b8-4199-9a73-3e371b45ea2a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.411610 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.411648 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.411660 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6nql\" (UniqueName: \"kubernetes.io/projected/3c522fb5-f4b8-4199-9a73-3e371b45ea2a-kube-api-access-l6nql\") on node \"crc\" DevicePath \"\"" Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.581669 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k7q9"] Jan 05 22:39:32 crc kubenswrapper[5034]: I0105 22:39:32.587498 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4k7q9"] Jan 05 22:39:33 crc kubenswrapper[5034]: I0105 22:39:33.851761 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" path="/var/lib/kubelet/pods/3c522fb5-f4b8-4199-9a73-3e371b45ea2a/volumes" Jan 05 22:39:50 crc kubenswrapper[5034]: I0105 22:39:50.468426 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:39:50 crc kubenswrapper[5034]: I0105 22:39:50.468899 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:39:50 crc kubenswrapper[5034]: I0105 22:39:50.468937 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:39:50 crc kubenswrapper[5034]: I0105 22:39:50.469533 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:39:50 crc kubenswrapper[5034]: I0105 22:39:50.469584 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" gracePeriod=600 Jan 05 22:39:50 crc kubenswrapper[5034]: E0105 22:39:50.593893 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:39:51 crc kubenswrapper[5034]: I0105 22:39:51.409646 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" exitCode=0 Jan 05 22:39:51 crc kubenswrapper[5034]: I0105 22:39:51.409691 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c"} Jan 05 22:39:51 crc kubenswrapper[5034]: I0105 22:39:51.409767 5034 scope.go:117] "RemoveContainer" containerID="bc5cb06c3c6a2023859862b1c61dde7c65fcef7432c175f100357f4726da8fb5" Jan 05 22:39:51 crc kubenswrapper[5034]: I0105 22:39:51.410655 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:39:51 crc kubenswrapper[5034]: E0105 22:39:51.410923 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:39:57 crc kubenswrapper[5034]: I0105 22:39:57.297804 5034 scope.go:117] "RemoveContainer" containerID="08f758d2f7d3242a9cab39166cd7f687b37bff678d27d2611d85aa01a500d962" Jan 05 22:39:57 crc kubenswrapper[5034]: I0105 22:39:57.327004 5034 scope.go:117] "RemoveContainer" containerID="3091ed3da41109a842c8fc8cfa2266a5a4e9a3d07db73e01a375410fe14614e6" Jan 05 22:39:57 crc kubenswrapper[5034]: I0105 22:39:57.356283 5034 scope.go:117] "RemoveContainer" containerID="adb1f3f0351d27f1c1710d70e43939a45af281df3b046a6628ecbd6e26cc89e1" Jan 05 22:40:03 crc kubenswrapper[5034]: I0105 22:40:03.839567 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:40:03 crc kubenswrapper[5034]: E0105 22:40:03.840547 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:40:06 crc kubenswrapper[5034]: E0105 22:40:06.941696 5034 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 05 22:40:17 crc kubenswrapper[5034]: I0105 22:40:17.842978 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:40:17 crc kubenswrapper[5034]: E0105 22:40:17.843959 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:40:28 crc kubenswrapper[5034]: I0105 22:40:28.838275 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:40:28 crc kubenswrapper[5034]: E0105 22:40:28.839313 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:40:42 crc kubenswrapper[5034]: I0105 22:40:42.838328 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:40:42 crc kubenswrapper[5034]: E0105 22:40:42.839040 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:40:56 crc kubenswrapper[5034]: I0105 22:40:56.838970 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:40:56 crc kubenswrapper[5034]: E0105 22:40:56.839824 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:41:08 crc kubenswrapper[5034]: I0105 22:41:08.839118 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:41:08 crc kubenswrapper[5034]: E0105 22:41:08.840196 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:41:21 crc kubenswrapper[5034]: I0105 22:41:21.838242 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:41:21 crc kubenswrapper[5034]: E0105 22:41:21.839021 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" 
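[From 22:39:51 onward the machine-config-daemon sits in CrashLoopBackOff; the repeated RemoveContainer/"Error syncing pod" pairs are sync retries while the backoff holds, not fresh restarts. Only the 5m0s cap is visible in the messages; the 10s base and doubling factor below are the commonly documented kubelet defaults, assumed here. A sketch of the schedule that implies:

package main

import (
	"fmt"
	"time"
)

// Doubling restart backoff with a 5m cap. Base and factor are assumptions
// (usual kubelet defaults); only the cap appears in the log above.
func main() {
	delay, limit := 10*time.Second, 5*time.Minute
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d: wait %v before restart\n", crash, delay)
		if delay *= 2; delay > limit {
			delay = limit
		}
	}
}
]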
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:41:33 crc kubenswrapper[5034]: I0105 22:41:33.839565 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:41:33 crc kubenswrapper[5034]: E0105 22:41:33.841617 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:41:44 crc kubenswrapper[5034]: I0105 22:41:44.839203 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:41:44 crc kubenswrapper[5034]: E0105 22:41:44.840677 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:41:57 crc kubenswrapper[5034]: I0105 22:41:57.849205 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:41:57 crc kubenswrapper[5034]: E0105 22:41:57.850103 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:42:10 crc kubenswrapper[5034]: I0105 22:42:10.839143 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:42:10 crc kubenswrapper[5034]: E0105 22:42:10.840387 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:42:22 crc kubenswrapper[5034]: I0105 22:42:22.838527 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:42:22 crc kubenswrapper[5034]: E0105 22:42:22.839348 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:42:36 crc kubenswrapper[5034]: I0105 22:42:36.838540 5034 
scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:42:36 crc kubenswrapper[5034]: E0105 22:42:36.839797 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:42:48 crc kubenswrapper[5034]: I0105 22:42:48.838822 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:42:48 crc kubenswrapper[5034]: E0105 22:42:48.839734 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:43:01 crc kubenswrapper[5034]: I0105 22:43:01.838715 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:43:01 crc kubenswrapper[5034]: E0105 22:43:01.839514 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:43:14 crc kubenswrapper[5034]: I0105 22:43:14.839136 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:43:14 crc kubenswrapper[5034]: E0105 22:43:14.840375 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:43:29 crc kubenswrapper[5034]: I0105 22:43:29.838424 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:43:29 crc kubenswrapper[5034]: E0105 22:43:29.839248 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:43:40 crc kubenswrapper[5034]: I0105 22:43:40.838805 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:43:40 crc kubenswrapper[5034]: E0105 22:43:40.840438 5034 
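[Runs like the one above are easier to audit in aggregate than by eye. A small Go filter that parses the klog header out of each journal line (severity letter, timestamp, source location) and counts entries per location, which makes the pod_workers.go:1301 loop stand out immediately. Feed it journal text on stdin, one entry per line; on this host that would be something like "journalctl -u kubelet --no-pager" (unit name assumed).

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Count journal entries per klog source location, e.g. "E pod_workers.go:1301".
func main() {
	re := regexp.MustCompile(`([IWE])\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ ([\w.]+:\d+)\]`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%6d %s\n", n, k)
	}
}
]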
Jan 05 22:43:40 crc kubenswrapper[5034]: E0105 22:43:40.840438 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 22:43:53 crc kubenswrapper[5034]: I0105 22:43:53.838367 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c"
Jan 05 22:43:53 crc kubenswrapper[5034]: E0105 22:43:53.839202 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 22:44:08 crc kubenswrapper[5034]: I0105 22:44:08.838675 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c"
Jan 05 22:44:08 crc kubenswrapper[5034]: E0105 22:44:08.839618 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 22:44:12 crc kubenswrapper[5034]: I0105 22:44:12.667532 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4rpvp"]
Jan 05 22:44:12 crc kubenswrapper[5034]: E0105 22:44:12.668993 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerName="extract-content"
Jan 05 22:44:12 crc kubenswrapper[5034]: I0105 22:44:12.669016 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerName="extract-content"
Jan 05 22:44:12 crc kubenswrapper[5034]: E0105 22:44:12.669048 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerName="extract-utilities"
Jan 05 22:44:12 crc kubenswrapper[5034]: I0105 22:44:12.669063 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerName="extract-utilities"
Jan 05 22:44:12 crc kubenswrapper[5034]: E0105 22:44:12.669114 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerName="registry-server"
Jan 05 22:44:12 crc kubenswrapper[5034]: I0105 22:44:12.669129 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerName="registry-server"
Jan 05 22:44:12 crc kubenswrapper[5034]: I0105 22:44:12.669396 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c522fb5-f4b8-4199-9a73-3e371b45ea2a" containerName="registry-server"
Jan 05 22:44:12 crc kubenswrapper[5034]: I0105 22:44:12.671689 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:12 crc kubenswrapper[5034]: I0105 22:44:12.682690 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rpvp"]
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.518598 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-utilities\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.518680 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-catalog-content\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.518842 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7cr9\" (UniqueName: \"kubernetes.io/projected/01e85b92-746c-4647-9f76-7b2510af0a35-kube-api-access-r7cr9\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.620195 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7cr9\" (UniqueName: \"kubernetes.io/projected/01e85b92-746c-4647-9f76-7b2510af0a35-kube-api-access-r7cr9\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.620572 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-utilities\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.620679 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-catalog-content\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.621221 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-utilities\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.621263 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-catalog-content\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.655320 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7cr9\" (UniqueName: \"kubernetes.io/projected/01e85b92-746c-4647-9f76-7b2510af0a35-kube-api-access-r7cr9\") pod \"redhat-marketplace-4rpvp\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:13 crc kubenswrapper[5034]: I0105 22:44:13.901807 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:14 crc kubenswrapper[5034]: I0105 22:44:14.383840 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rpvp"]
Jan 05 22:44:14 crc kubenswrapper[5034]: W0105 22:44:14.396709 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01e85b92_746c_4647_9f76_7b2510af0a35.slice/crio-db709bbc9e8e6f8fa3f841832e96b28bd6e7644ec381546bdc00a25bb344b212 WatchSource:0}: Error finding container db709bbc9e8e6f8fa3f841832e96b28bd6e7644ec381546bdc00a25bb344b212: Status 404 returned error can't find the container with id db709bbc9e8e6f8fa3f841832e96b28bd6e7644ec381546bdc00a25bb344b212
Jan 05 22:44:14 crc kubenswrapper[5034]: I0105 22:44:14.661760 5034 generic.go:334] "Generic (PLEG): container finished" podID="01e85b92-746c-4647-9f76-7b2510af0a35" containerID="dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e" exitCode=0
Jan 05 22:44:14 crc kubenswrapper[5034]: I0105 22:44:14.661832 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rpvp" event={"ID":"01e85b92-746c-4647-9f76-7b2510af0a35","Type":"ContainerDied","Data":"dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e"}
Jan 05 22:44:14 crc kubenswrapper[5034]: I0105 22:44:14.661870 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rpvp" event={"ID":"01e85b92-746c-4647-9f76-7b2510af0a35","Type":"ContainerStarted","Data":"db709bbc9e8e6f8fa3f841832e96b28bd6e7644ec381546bdc00a25bb344b212"}
Jan 05 22:44:15 crc kubenswrapper[5034]: I0105 22:44:15.676044 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rpvp" event={"ID":"01e85b92-746c-4647-9f76-7b2510af0a35","Type":"ContainerStarted","Data":"82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3"}
Jan 05 22:44:16 crc kubenswrapper[5034]: I0105 22:44:16.689605 5034 generic.go:334] "Generic (PLEG): container finished" podID="01e85b92-746c-4647-9f76-7b2510af0a35" containerID="82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3" exitCode=0
Jan 05 22:44:16 crc kubenswrapper[5034]: I0105 22:44:16.689663 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rpvp" event={"ID":"01e85b92-746c-4647-9f76-7b2510af0a35","Type":"ContainerDied","Data":"82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3"}
Jan 05 22:44:17 crc kubenswrapper[5034]: I0105 22:44:17.700571 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rpvp" event={"ID":"01e85b92-746c-4647-9f76-7b2510af0a35","Type":"ContainerStarted","Data":"2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe"}
Jan 05 22:44:17 crc kubenswrapper[5034]: I0105 22:44:17.728998 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4rpvp" podStartSLOduration=3.134268612 podStartE2EDuration="5.728956093s" podCreationTimestamp="2026-01-05 22:44:12 +0000 UTC" firstStartedPulling="2026-01-05 22:44:14.663746531 +0000 UTC m=+3147.035745970" lastFinishedPulling="2026-01-05 22:44:17.258433982 +0000 UTC m=+3149.630433451" observedRunningTime="2026-01-05 22:44:17.724876938 +0000 UTC m=+3150.096876387" watchObservedRunningTime="2026-01-05 22:44:17.728956093 +0000 UTC m=+3150.100955582"
Jan 05 22:44:20 crc kubenswrapper[5034]: I0105 22:44:20.839028 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c"
Jan 05 22:44:20 crc kubenswrapper[5034]: E0105 22:44:20.839569 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 22:44:23 crc kubenswrapper[5034]: I0105 22:44:23.901914 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:23 crc kubenswrapper[5034]: I0105 22:44:23.902505 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:23 crc kubenswrapper[5034]: I0105 22:44:23.955333 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:24 crc kubenswrapper[5034]: I0105 22:44:24.794734 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4rpvp"
Jan 05 22:44:24 crc kubenswrapper[5034]: I0105 22:44:24.840163 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rpvp"]
Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.614951 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ddl8"]
Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.617529 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ddl8"
Need to start a new one" pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.638497 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-utilities\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.638550 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-catalog-content\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.638615 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fsll\" (UniqueName: \"kubernetes.io/projected/8823b65c-6277-4ad7-9d77-0981236266f6-kube-api-access-4fsll\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.639422 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ddl8"] Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.740672 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-utilities\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.740717 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-catalog-content\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.740839 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fsll\" (UniqueName: \"kubernetes.io/projected/8823b65c-6277-4ad7-9d77-0981236266f6-kube-api-access-4fsll\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.755544 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-utilities\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.757590 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-catalog-content\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.770135 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4fsll\" (UniqueName: \"kubernetes.io/projected/8823b65c-6277-4ad7-9d77-0981236266f6-kube-api-access-4fsll\") pod \"community-operators-9ddl8\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.770899 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4rpvp" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" containerName="registry-server" containerID="cri-o://2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe" gracePeriod=2 Jan 05 22:44:26 crc kubenswrapper[5034]: I0105 22:44:26.944850 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.219199 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rpvp" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.283556 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ddl8"] Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.352791 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7cr9\" (UniqueName: \"kubernetes.io/projected/01e85b92-746c-4647-9f76-7b2510af0a35-kube-api-access-r7cr9\") pod \"01e85b92-746c-4647-9f76-7b2510af0a35\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.353126 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-utilities\") pod \"01e85b92-746c-4647-9f76-7b2510af0a35\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.354239 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-catalog-content\") pod \"01e85b92-746c-4647-9f76-7b2510af0a35\" (UID: \"01e85b92-746c-4647-9f76-7b2510af0a35\") " Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.354166 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-utilities" (OuterVolumeSpecName: "utilities") pod "01e85b92-746c-4647-9f76-7b2510af0a35" (UID: "01e85b92-746c-4647-9f76-7b2510af0a35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.356021 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.359723 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e85b92-746c-4647-9f76-7b2510af0a35-kube-api-access-r7cr9" (OuterVolumeSpecName: "kube-api-access-r7cr9") pod "01e85b92-746c-4647-9f76-7b2510af0a35" (UID: "01e85b92-746c-4647-9f76-7b2510af0a35"). InnerVolumeSpecName "kube-api-access-r7cr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.384378 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01e85b92-746c-4647-9f76-7b2510af0a35" (UID: "01e85b92-746c-4647-9f76-7b2510af0a35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.457993 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e85b92-746c-4647-9f76-7b2510af0a35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.458406 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7cr9\" (UniqueName: \"kubernetes.io/projected/01e85b92-746c-4647-9f76-7b2510af0a35-kube-api-access-r7cr9\") on node \"crc\" DevicePath \"\"" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.781375 5034 generic.go:334] "Generic (PLEG): container finished" podID="01e85b92-746c-4647-9f76-7b2510af0a35" containerID="2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe" exitCode=0 Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.781447 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rpvp" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.781537 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rpvp" event={"ID":"01e85b92-746c-4647-9f76-7b2510af0a35","Type":"ContainerDied","Data":"2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe"} Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.781588 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rpvp" event={"ID":"01e85b92-746c-4647-9f76-7b2510af0a35","Type":"ContainerDied","Data":"db709bbc9e8e6f8fa3f841832e96b28bd6e7644ec381546bdc00a25bb344b212"} Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.781613 5034 scope.go:117] "RemoveContainer" containerID="2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.783661 5034 generic.go:334] "Generic (PLEG): container finished" podID="8823b65c-6277-4ad7-9d77-0981236266f6" containerID="23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008" exitCode=0 Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.783703 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ddl8" event={"ID":"8823b65c-6277-4ad7-9d77-0981236266f6","Type":"ContainerDied","Data":"23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008"} Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.783732 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ddl8" event={"ID":"8823b65c-6277-4ad7-9d77-0981236266f6","Type":"ContainerStarted","Data":"0ac984d5f21c9dccec9324c066c9e6d0465b6ba1d4e0211e3f78dd433ac59c81"} Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.785874 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.820905 5034 scope.go:117] "RemoveContainer" 
containerID="82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.837723 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rpvp"] Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.853153 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rpvp"] Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.863471 5034 scope.go:117] "RemoveContainer" containerID="dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.889536 5034 scope.go:117] "RemoveContainer" containerID="2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe" Jan 05 22:44:27 crc kubenswrapper[5034]: E0105 22:44:27.890095 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe\": container with ID starting with 2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe not found: ID does not exist" containerID="2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.890169 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe"} err="failed to get container status \"2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe\": rpc error: code = NotFound desc = could not find container \"2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe\": container with ID starting with 2a45584761452d7e2501ba8fcca2d41fa70d7c476b53981abc4e0e08c0a0cbbe not found: ID does not exist" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.890220 5034 scope.go:117] "RemoveContainer" containerID="82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3" Jan 05 22:44:27 crc kubenswrapper[5034]: E0105 22:44:27.891004 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3\": container with ID starting with 82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3 not found: ID does not exist" containerID="82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.891052 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3"} err="failed to get container status \"82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3\": rpc error: code = NotFound desc = could not find container \"82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3\": container with ID starting with 82bef2a1b0c1c965764276fdefc9caf929a1f289aa7c18fad61e2a2f3839d3f3 not found: ID does not exist" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.891099 5034 scope.go:117] "RemoveContainer" containerID="dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e" Jan 05 22:44:27 crc kubenswrapper[5034]: E0105 22:44:27.891712 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e\": container with ID starting with 
dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e not found: ID does not exist" containerID="dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e" Jan 05 22:44:27 crc kubenswrapper[5034]: I0105 22:44:27.891747 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e"} err="failed to get container status \"dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e\": rpc error: code = NotFound desc = could not find container \"dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e\": container with ID starting with dda1d451f513f3910cee6d070ba161214068018ee412c02631852b53f568152e not found: ID does not exist" Jan 05 22:44:28 crc kubenswrapper[5034]: I0105 22:44:28.793961 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ddl8" event={"ID":"8823b65c-6277-4ad7-9d77-0981236266f6","Type":"ContainerStarted","Data":"8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77"} Jan 05 22:44:29 crc kubenswrapper[5034]: I0105 22:44:29.803549 5034 generic.go:334] "Generic (PLEG): container finished" podID="8823b65c-6277-4ad7-9d77-0981236266f6" containerID="8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77" exitCode=0 Jan 05 22:44:29 crc kubenswrapper[5034]: I0105 22:44:29.803593 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ddl8" event={"ID":"8823b65c-6277-4ad7-9d77-0981236266f6","Type":"ContainerDied","Data":"8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77"} Jan 05 22:44:29 crc kubenswrapper[5034]: I0105 22:44:29.847566 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" path="/var/lib/kubelet/pods/01e85b92-746c-4647-9f76-7b2510af0a35/volumes" Jan 05 22:44:30 crc kubenswrapper[5034]: I0105 22:44:30.814315 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ddl8" event={"ID":"8823b65c-6277-4ad7-9d77-0981236266f6","Type":"ContainerStarted","Data":"c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6"} Jan 05 22:44:30 crc kubenswrapper[5034]: I0105 22:44:30.833159 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ddl8" podStartSLOduration=2.368506711 podStartE2EDuration="4.83313739s" podCreationTimestamp="2026-01-05 22:44:26 +0000 UTC" firstStartedPulling="2026-01-05 22:44:27.785621758 +0000 UTC m=+3160.157621207" lastFinishedPulling="2026-01-05 22:44:30.250252447 +0000 UTC m=+3162.622251886" observedRunningTime="2026-01-05 22:44:30.830698541 +0000 UTC m=+3163.202697980" watchObservedRunningTime="2026-01-05 22:44:30.83313739 +0000 UTC m=+3163.205136829" Jan 05 22:44:31 crc kubenswrapper[5034]: I0105 22:44:31.838100 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:44:31 crc kubenswrapper[5034]: E0105 22:44:31.838398 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 
22:44:36 crc kubenswrapper[5034]: I0105 22:44:36.945789 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:36 crc kubenswrapper[5034]: I0105 22:44:36.946441 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:37 crc kubenswrapper[5034]: I0105 22:44:37.029379 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:37 crc kubenswrapper[5034]: I0105 22:44:37.924313 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:37 crc kubenswrapper[5034]: I0105 22:44:37.997301 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ddl8"] Jan 05 22:44:39 crc kubenswrapper[5034]: I0105 22:44:39.892953 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9ddl8" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" containerName="registry-server" containerID="cri-o://c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6" gracePeriod=2 Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.811903 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.908172 5034 generic.go:334] "Generic (PLEG): container finished" podID="8823b65c-6277-4ad7-9d77-0981236266f6" containerID="c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6" exitCode=0 Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.908239 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ddl8" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.908276 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ddl8" event={"ID":"8823b65c-6277-4ad7-9d77-0981236266f6","Type":"ContainerDied","Data":"c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6"} Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.908505 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ddl8" event={"ID":"8823b65c-6277-4ad7-9d77-0981236266f6","Type":"ContainerDied","Data":"0ac984d5f21c9dccec9324c066c9e6d0465b6ba1d4e0211e3f78dd433ac59c81"} Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.908536 5034 scope.go:117] "RemoveContainer" containerID="c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.935102 5034 scope.go:117] "RemoveContainer" containerID="8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.959765 5034 scope.go:117] "RemoveContainer" containerID="23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.985777 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-catalog-content\") pod \"8823b65c-6277-4ad7-9d77-0981236266f6\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.986122 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fsll\" (UniqueName: \"kubernetes.io/projected/8823b65c-6277-4ad7-9d77-0981236266f6-kube-api-access-4fsll\") pod \"8823b65c-6277-4ad7-9d77-0981236266f6\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.986169 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-utilities\") pod \"8823b65c-6277-4ad7-9d77-0981236266f6\" (UID: \"8823b65c-6277-4ad7-9d77-0981236266f6\") " Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.987110 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-utilities" (OuterVolumeSpecName: "utilities") pod "8823b65c-6277-4ad7-9d77-0981236266f6" (UID: "8823b65c-6277-4ad7-9d77-0981236266f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.989538 5034 scope.go:117] "RemoveContainer" containerID="c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6" Jan 05 22:44:40 crc kubenswrapper[5034]: E0105 22:44:40.990245 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6\": container with ID starting with c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6 not found: ID does not exist" containerID="c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.990280 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6"} err="failed to get container status \"c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6\": rpc error: code = NotFound desc = could not find container \"c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6\": container with ID starting with c0eabd9ccd323adfb008ad2e4c734cce7a67fb47c81dc5e632cbe3d5607600e6 not found: ID does not exist" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.990303 5034 scope.go:117] "RemoveContainer" containerID="8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77" Jan 05 22:44:40 crc kubenswrapper[5034]: E0105 22:44:40.990817 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77\": container with ID starting with 8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77 not found: ID does not exist" containerID="8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.990844 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77"} err="failed to get container status \"8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77\": rpc error: code = NotFound desc = could not find container \"8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77\": container with ID starting with 8f6e8aca43ce7353d627ccda2283a6ad6976261ab7286470a52f6ff574221e77 not found: ID does not exist" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.990859 5034 scope.go:117] "RemoveContainer" containerID="23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008" Jan 05 22:44:40 crc kubenswrapper[5034]: E0105 22:44:40.991162 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008\": container with ID starting with 23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008 not found: ID does not exist" containerID="23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.991192 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008"} err="failed to get container status \"23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008\": rpc error: code = NotFound desc = could not 
find container \"23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008\": container with ID starting with 23feb29c1c258e9f4b31a413c15b3885aa9f19296b41c4850b94bcbd66974008 not found: ID does not exist" Jan 05 22:44:40 crc kubenswrapper[5034]: I0105 22:44:40.993844 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8823b65c-6277-4ad7-9d77-0981236266f6-kube-api-access-4fsll" (OuterVolumeSpecName: "kube-api-access-4fsll") pod "8823b65c-6277-4ad7-9d77-0981236266f6" (UID: "8823b65c-6277-4ad7-9d77-0981236266f6"). InnerVolumeSpecName "kube-api-access-4fsll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:44:41 crc kubenswrapper[5034]: I0105 22:44:41.049486 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8823b65c-6277-4ad7-9d77-0981236266f6" (UID: "8823b65c-6277-4ad7-9d77-0981236266f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:44:41 crc kubenswrapper[5034]: I0105 22:44:41.088192 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:44:41 crc kubenswrapper[5034]: I0105 22:44:41.088235 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fsll\" (UniqueName: \"kubernetes.io/projected/8823b65c-6277-4ad7-9d77-0981236266f6-kube-api-access-4fsll\") on node \"crc\" DevicePath \"\"" Jan 05 22:44:41 crc kubenswrapper[5034]: I0105 22:44:41.088252 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8823b65c-6277-4ad7-9d77-0981236266f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:44:41 crc kubenswrapper[5034]: I0105 22:44:41.250400 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ddl8"] Jan 05 22:44:41 crc kubenswrapper[5034]: I0105 22:44:41.259158 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9ddl8"] Jan 05 22:44:41 crc kubenswrapper[5034]: I0105 22:44:41.854016 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" path="/var/lib/kubelet/pods/8823b65c-6277-4ad7-9d77-0981236266f6/volumes" Jan 05 22:44:45 crc kubenswrapper[5034]: I0105 22:44:45.838875 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:44:45 crc kubenswrapper[5034]: E0105 22:44:45.839505 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:44:56 crc kubenswrapper[5034]: I0105 22:44:56.839369 5034 scope.go:117] "RemoveContainer" containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:44:58 crc kubenswrapper[5034]: I0105 22:44:58.057065 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"dd85f882cd163104b4d356784fdce6e5f1141075a549f0132eb14e28e98a9c1b"} Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.162540 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x"] Jan 05 22:45:00 crc kubenswrapper[5034]: E0105 22:45:00.164869 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" containerName="extract-utilities" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.164894 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" containerName="extract-utilities" Jan 05 22:45:00 crc kubenswrapper[5034]: E0105 22:45:00.164930 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" containerName="registry-server" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.164938 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" containerName="registry-server" Jan 05 22:45:00 crc kubenswrapper[5034]: E0105 22:45:00.164950 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" containerName="registry-server" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.164958 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" containerName="registry-server" Jan 05 22:45:00 crc kubenswrapper[5034]: E0105 22:45:00.164978 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" containerName="extract-content" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.164985 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" containerName="extract-content" Jan 05 22:45:00 crc kubenswrapper[5034]: E0105 22:45:00.164998 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" containerName="extract-content" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.165005 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" containerName="extract-content" Jan 05 22:45:00 crc kubenswrapper[5034]: E0105 22:45:00.165018 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" containerName="extract-utilities" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.165025 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" containerName="extract-utilities" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.165206 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e85b92-746c-4647-9f76-7b2510af0a35" containerName="registry-server" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.165233 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8823b65c-6277-4ad7-9d77-0981236266f6" containerName="registry-server" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.167156 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.174904 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.175670 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.201762 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x"] Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.295962 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340556b0-d1f9-4e04-bd8e-cd2973e35c37-secret-volume\") pod \"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.296075 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc5mh\" (UniqueName: \"kubernetes.io/projected/340556b0-d1f9-4e04-bd8e-cd2973e35c37-kube-api-access-wc5mh\") pod \"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.296130 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340556b0-d1f9-4e04-bd8e-cd2973e35c37-config-volume\") pod \"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.398007 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc5mh\" (UniqueName: \"kubernetes.io/projected/340556b0-d1f9-4e04-bd8e-cd2973e35c37-kube-api-access-wc5mh\") pod \"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.398094 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340556b0-d1f9-4e04-bd8e-cd2973e35c37-config-volume\") pod \"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.398199 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340556b0-d1f9-4e04-bd8e-cd2973e35c37-secret-volume\") pod \"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.400204 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340556b0-d1f9-4e04-bd8e-cd2973e35c37-config-volume\") pod 
\"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.414246 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340556b0-d1f9-4e04-bd8e-cd2973e35c37-secret-volume\") pod \"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.421302 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc5mh\" (UniqueName: \"kubernetes.io/projected/340556b0-d1f9-4e04-bd8e-cd2973e35c37-kube-api-access-wc5mh\") pod \"collect-profiles-29460885-kc52x\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.521837 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:00 crc kubenswrapper[5034]: I0105 22:45:00.984446 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x"] Jan 05 22:45:01 crc kubenswrapper[5034]: I0105 22:45:01.084019 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" event={"ID":"340556b0-d1f9-4e04-bd8e-cd2973e35c37","Type":"ContainerStarted","Data":"f1abc933ad7c5ae9e63b4c163f5fd4c315b6895490bc78553873f4c1bcac4a05"} Jan 05 22:45:02 crc kubenswrapper[5034]: I0105 22:45:02.117686 5034 generic.go:334] "Generic (PLEG): container finished" podID="340556b0-d1f9-4e04-bd8e-cd2973e35c37" containerID="8a028b63bd5bc008054daa4235142157039813163f46f1c5e5f864c247a810f5" exitCode=0 Jan 05 22:45:02 crc kubenswrapper[5034]: I0105 22:45:02.117770 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" event={"ID":"340556b0-d1f9-4e04-bd8e-cd2973e35c37","Type":"ContainerDied","Data":"8a028b63bd5bc008054daa4235142157039813163f46f1c5e5f864c247a810f5"} Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.467600 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.654218 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340556b0-d1f9-4e04-bd8e-cd2973e35c37-secret-volume\") pod \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.654476 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340556b0-d1f9-4e04-bd8e-cd2973e35c37-config-volume\") pod \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.654633 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc5mh\" (UniqueName: \"kubernetes.io/projected/340556b0-d1f9-4e04-bd8e-cd2973e35c37-kube-api-access-wc5mh\") pod \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\" (UID: \"340556b0-d1f9-4e04-bd8e-cd2973e35c37\") " Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.655631 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/340556b0-d1f9-4e04-bd8e-cd2973e35c37-config-volume" (OuterVolumeSpecName: "config-volume") pod "340556b0-d1f9-4e04-bd8e-cd2973e35c37" (UID: "340556b0-d1f9-4e04-bd8e-cd2973e35c37"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.664662 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/340556b0-d1f9-4e04-bd8e-cd2973e35c37-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "340556b0-d1f9-4e04-bd8e-cd2973e35c37" (UID: "340556b0-d1f9-4e04-bd8e-cd2973e35c37"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.665497 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340556b0-d1f9-4e04-bd8e-cd2973e35c37-kube-api-access-wc5mh" (OuterVolumeSpecName: "kube-api-access-wc5mh") pod "340556b0-d1f9-4e04-bd8e-cd2973e35c37" (UID: "340556b0-d1f9-4e04-bd8e-cd2973e35c37"). InnerVolumeSpecName "kube-api-access-wc5mh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.757008 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340556b0-d1f9-4e04-bd8e-cd2973e35c37-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.757061 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340556b0-d1f9-4e04-bd8e-cd2973e35c37-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:45:03 crc kubenswrapper[5034]: I0105 22:45:03.757157 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc5mh\" (UniqueName: \"kubernetes.io/projected/340556b0-d1f9-4e04-bd8e-cd2973e35c37-kube-api-access-wc5mh\") on node \"crc\" DevicePath \"\"" Jan 05 22:45:04 crc kubenswrapper[5034]: I0105 22:45:04.142580 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" event={"ID":"340556b0-d1f9-4e04-bd8e-cd2973e35c37","Type":"ContainerDied","Data":"f1abc933ad7c5ae9e63b4c163f5fd4c315b6895490bc78553873f4c1bcac4a05"} Jan 05 22:45:04 crc kubenswrapper[5034]: I0105 22:45:04.143235 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1abc933ad7c5ae9e63b4c163f5fd4c315b6895490bc78553873f4c1bcac4a05" Jan 05 22:45:04 crc kubenswrapper[5034]: I0105 22:45:04.142677 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x" Jan 05 22:45:04 crc kubenswrapper[5034]: I0105 22:45:04.589057 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl"] Jan 05 22:45:04 crc kubenswrapper[5034]: I0105 22:45:04.594270 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-mh2gl"] Jan 05 22:45:05 crc kubenswrapper[5034]: I0105 22:45:05.856130 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbd202ab-05ca-45d8-a0b5-bb0629ea5a75" path="/var/lib/kubelet/pods/fbd202ab-05ca-45d8-a0b5-bb0629ea5a75/volumes" Jan 05 22:45:57 crc kubenswrapper[5034]: I0105 22:45:57.524602 5034 scope.go:117] "RemoveContainer" containerID="752f4fb4a92ec2e4cdc6d8de06539066891d0741f2f2585f90bff243259a7f4b" Jan 05 22:47:04 crc kubenswrapper[5034]: E0105 22:47:04.985847 5034 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 05 22:47:20 crc kubenswrapper[5034]: I0105 22:47:20.468580 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:47:20 crc kubenswrapper[5034]: I0105 22:47:20.469018 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:47:50 crc kubenswrapper[5034]: I0105 22:47:50.469653 5034 
patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:47:50 crc kubenswrapper[5034]: I0105 22:47:50.470636 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:48:20 crc kubenswrapper[5034]: I0105 22:48:20.469449 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:48:20 crc kubenswrapper[5034]: I0105 22:48:20.471336 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:48:20 crc kubenswrapper[5034]: I0105 22:48:20.471421 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:48:20 crc kubenswrapper[5034]: I0105 22:48:20.472425 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd85f882cd163104b4d356784fdce6e5f1141075a549f0132eb14e28e98a9c1b"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:48:20 crc kubenswrapper[5034]: I0105 22:48:20.472529 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://dd85f882cd163104b4d356784fdce6e5f1141075a549f0132eb14e28e98a9c1b" gracePeriod=600 Jan 05 22:48:21 crc kubenswrapper[5034]: I0105 22:48:21.114539 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="dd85f882cd163104b4d356784fdce6e5f1141075a549f0132eb14e28e98a9c1b" exitCode=0 Jan 05 22:48:21 crc kubenswrapper[5034]: I0105 22:48:21.114903 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"dd85f882cd163104b4d356784fdce6e5f1141075a549f0132eb14e28e98a9c1b"} Jan 05 22:48:21 crc kubenswrapper[5034]: I0105 22:48:21.114933 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253"} Jan 05 22:48:21 crc kubenswrapper[5034]: I0105 22:48:21.114951 5034 scope.go:117] "RemoveContainer" 
containerID="f6e1c127e3cc254bba19a0c0d7e8e79ff1f23abebb267c8081068b6f4546d84c" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.573116 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8vzp5"] Jan 05 22:48:33 crc kubenswrapper[5034]: E0105 22:48:33.574517 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340556b0-d1f9-4e04-bd8e-cd2973e35c37" containerName="collect-profiles" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.577772 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="340556b0-d1f9-4e04-bd8e-cd2973e35c37" containerName="collect-profiles" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.578361 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="340556b0-d1f9-4e04-bd8e-cd2973e35c37" containerName="collect-profiles" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.580030 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.584201 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vzp5"] Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.641797 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-catalog-content\") pod \"redhat-operators-8vzp5\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.641852 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82n6n\" (UniqueName: \"kubernetes.io/projected/4016423f-890d-4d69-8d70-e394cb5ec1f6-kube-api-access-82n6n\") pod \"redhat-operators-8vzp5\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.641951 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-utilities\") pod \"redhat-operators-8vzp5\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.742954 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-catalog-content\") pod \"redhat-operators-8vzp5\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.743004 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82n6n\" (UniqueName: \"kubernetes.io/projected/4016423f-890d-4d69-8d70-e394cb5ec1f6-kube-api-access-82n6n\") pod \"redhat-operators-8vzp5\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.743092 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-utilities\") pod \"redhat-operators-8vzp5\" (UID: 
\"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.743580 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-catalog-content\") pod \"redhat-operators-8vzp5\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.743670 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-utilities\") pod \"redhat-operators-8vzp5\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.766067 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82n6n\" (UniqueName: \"kubernetes.io/projected/4016423f-890d-4d69-8d70-e394cb5ec1f6-kube-api-access-82n6n\") pod \"redhat-operators-8vzp5\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:33 crc kubenswrapper[5034]: I0105 22:48:33.905547 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:34 crc kubenswrapper[5034]: I0105 22:48:34.339475 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vzp5"] Jan 05 22:48:35 crc kubenswrapper[5034]: I0105 22:48:35.229889 5034 generic.go:334] "Generic (PLEG): container finished" podID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerID="b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368" exitCode=0 Jan 05 22:48:35 crc kubenswrapper[5034]: I0105 22:48:35.229977 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzp5" event={"ID":"4016423f-890d-4d69-8d70-e394cb5ec1f6","Type":"ContainerDied","Data":"b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368"} Jan 05 22:48:35 crc kubenswrapper[5034]: I0105 22:48:35.230228 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzp5" event={"ID":"4016423f-890d-4d69-8d70-e394cb5ec1f6","Type":"ContainerStarted","Data":"563a644845cb2ac9cadc7ccb5d50ec228abfab3c2af66fc3c2acd5f72ceb03da"} Jan 05 22:48:37 crc kubenswrapper[5034]: I0105 22:48:37.245483 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzp5" event={"ID":"4016423f-890d-4d69-8d70-e394cb5ec1f6","Type":"ContainerStarted","Data":"e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060"} Jan 05 22:48:38 crc kubenswrapper[5034]: I0105 22:48:38.258061 5034 generic.go:334] "Generic (PLEG): container finished" podID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerID="e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060" exitCode=0 Jan 05 22:48:38 crc kubenswrapper[5034]: I0105 22:48:38.258282 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzp5" event={"ID":"4016423f-890d-4d69-8d70-e394cb5ec1f6","Type":"ContainerDied","Data":"e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060"} Jan 05 22:48:39 crc kubenswrapper[5034]: I0105 22:48:39.270647 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8vzp5" event={"ID":"4016423f-890d-4d69-8d70-e394cb5ec1f6","Type":"ContainerStarted","Data":"7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be"} Jan 05 22:48:39 crc kubenswrapper[5034]: I0105 22:48:39.298928 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8vzp5" podStartSLOduration=2.8131297379999998 podStartE2EDuration="6.298893828s" podCreationTimestamp="2026-01-05 22:48:33 +0000 UTC" firstStartedPulling="2026-01-05 22:48:35.231677106 +0000 UTC m=+3407.603676545" lastFinishedPulling="2026-01-05 22:48:38.717441196 +0000 UTC m=+3411.089440635" observedRunningTime="2026-01-05 22:48:39.29330169 +0000 UTC m=+3411.665301229" watchObservedRunningTime="2026-01-05 22:48:39.298893828 +0000 UTC m=+3411.670893267" Jan 05 22:48:43 crc kubenswrapper[5034]: I0105 22:48:43.906805 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:43 crc kubenswrapper[5034]: I0105 22:48:43.907191 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:44 crc kubenswrapper[5034]: I0105 22:48:44.952212 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8vzp5" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="registry-server" probeResult="failure" output=< Jan 05 22:48:44 crc kubenswrapper[5034]: timeout: failed to connect service ":50051" within 1s Jan 05 22:48:44 crc kubenswrapper[5034]: > Jan 05 22:48:53 crc kubenswrapper[5034]: I0105 22:48:53.957070 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:54 crc kubenswrapper[5034]: I0105 22:48:54.021576 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:54 crc kubenswrapper[5034]: I0105 22:48:54.202409 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vzp5"] Jan 05 22:48:55 crc kubenswrapper[5034]: I0105 22:48:55.396773 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8vzp5" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="registry-server" containerID="cri-o://7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be" gracePeriod=2 Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.205842 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.331662 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-utilities\") pod \"4016423f-890d-4d69-8d70-e394cb5ec1f6\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.331851 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-catalog-content\") pod \"4016423f-890d-4d69-8d70-e394cb5ec1f6\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.331898 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82n6n\" (UniqueName: \"kubernetes.io/projected/4016423f-890d-4d69-8d70-e394cb5ec1f6-kube-api-access-82n6n\") pod \"4016423f-890d-4d69-8d70-e394cb5ec1f6\" (UID: \"4016423f-890d-4d69-8d70-e394cb5ec1f6\") " Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.336357 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-utilities" (OuterVolumeSpecName: "utilities") pod "4016423f-890d-4d69-8d70-e394cb5ec1f6" (UID: "4016423f-890d-4d69-8d70-e394cb5ec1f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.339871 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4016423f-890d-4d69-8d70-e394cb5ec1f6-kube-api-access-82n6n" (OuterVolumeSpecName: "kube-api-access-82n6n") pod "4016423f-890d-4d69-8d70-e394cb5ec1f6" (UID: "4016423f-890d-4d69-8d70-e394cb5ec1f6"). InnerVolumeSpecName "kube-api-access-82n6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.408343 5034 generic.go:334] "Generic (PLEG): container finished" podID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerID="7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be" exitCode=0 Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.408410 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzp5" event={"ID":"4016423f-890d-4d69-8d70-e394cb5ec1f6","Type":"ContainerDied","Data":"7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be"} Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.408458 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzp5" event={"ID":"4016423f-890d-4d69-8d70-e394cb5ec1f6","Type":"ContainerDied","Data":"563a644845cb2ac9cadc7ccb5d50ec228abfab3c2af66fc3c2acd5f72ceb03da"} Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.408501 5034 scope.go:117] "RemoveContainer" containerID="7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.415845 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vzp5" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.432525 5034 scope.go:117] "RemoveContainer" containerID="e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.433522 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82n6n\" (UniqueName: \"kubernetes.io/projected/4016423f-890d-4d69-8d70-e394cb5ec1f6-kube-api-access-82n6n\") on node \"crc\" DevicePath \"\"" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.433555 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.453578 5034 scope.go:117] "RemoveContainer" containerID="b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.480287 5034 scope.go:117] "RemoveContainer" containerID="7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be" Jan 05 22:48:56 crc kubenswrapper[5034]: E0105 22:48:56.480803 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be\": container with ID starting with 7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be not found: ID does not exist" containerID="7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.480859 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be"} err="failed to get container status \"7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be\": rpc error: code = NotFound desc = could not find container \"7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be\": container with ID starting with 7927f728367b842fd092657405c1ff9dd2b24f99f4dc561a7bbebfdfcb2908be not found: ID does not exist" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.480889 5034 scope.go:117] "RemoveContainer" containerID="e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060" Jan 05 22:48:56 crc kubenswrapper[5034]: E0105 22:48:56.481320 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060\": container with ID starting with e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060 not found: ID does not exist" containerID="e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.481352 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060"} err="failed to get container status \"e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060\": rpc error: code = NotFound desc = could not find container \"e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060\": container with ID starting with e56bb6b3b0619596a2bebfc0b049db7f2cf144ee3713255b690821aabd44a060 not found: ID does not exist" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.481383 5034 scope.go:117] "RemoveContainer" 
containerID="b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368" Jan 05 22:48:56 crc kubenswrapper[5034]: E0105 22:48:56.481681 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368\": container with ID starting with b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368 not found: ID does not exist" containerID="b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.481703 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368"} err="failed to get container status \"b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368\": rpc error: code = NotFound desc = could not find container \"b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368\": container with ID starting with b6878339b0dd46e7cc5a3107dd042d0783de38181e514815a2a717faa05aa368 not found: ID does not exist" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.522862 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4016423f-890d-4d69-8d70-e394cb5ec1f6" (UID: "4016423f-890d-4d69-8d70-e394cb5ec1f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.535374 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4016423f-890d-4d69-8d70-e394cb5ec1f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.754251 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vzp5"] Jan 05 22:48:56 crc kubenswrapper[5034]: I0105 22:48:56.759644 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8vzp5"] Jan 05 22:48:57 crc kubenswrapper[5034]: I0105 22:48:57.848988 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" path="/var/lib/kubelet/pods/4016423f-890d-4d69-8d70-e394cb5ec1f6/volumes" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.726325 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvfwh"] Jan 05 22:49:21 crc kubenswrapper[5034]: E0105 22:49:21.727621 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="registry-server" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.727639 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="registry-server" Jan 05 22:49:21 crc kubenswrapper[5034]: E0105 22:49:21.727676 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="extract-content" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.727682 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="extract-content" Jan 05 22:49:21 crc kubenswrapper[5034]: E0105 22:49:21.727694 5034 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="extract-utilities" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.727708 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="extract-utilities" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.727851 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4016423f-890d-4d69-8d70-e394cb5ec1f6" containerName="registry-server" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.729056 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.739838 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvfwh"] Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.859919 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-catalog-content\") pod \"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.859977 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfnj\" (UniqueName: \"kubernetes.io/projected/dc9d1390-4984-47b4-a2f6-84830126dcff-kube-api-access-jdfnj\") pod \"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.860106 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-utilities\") pod \"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.961648 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-utilities\") pod \"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.961764 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-catalog-content\") pod \"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.961793 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfnj\" (UniqueName: \"kubernetes.io/projected/dc9d1390-4984-47b4-a2f6-84830126dcff-kube-api-access-jdfnj\") pod \"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.963164 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-catalog-content\") pod 
\"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:21 crc kubenswrapper[5034]: I0105 22:49:21.963175 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-utilities\") pod \"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:22 crc kubenswrapper[5034]: I0105 22:49:22.002658 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfnj\" (UniqueName: \"kubernetes.io/projected/dc9d1390-4984-47b4-a2f6-84830126dcff-kube-api-access-jdfnj\") pod \"certified-operators-rvfwh\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") " pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:22 crc kubenswrapper[5034]: I0105 22:49:22.049104 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:22 crc kubenswrapper[5034]: I0105 22:49:22.329338 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvfwh"] Jan 05 22:49:22 crc kubenswrapper[5034]: I0105 22:49:22.616405 5034 generic.go:334] "Generic (PLEG): container finished" podID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerID="08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275" exitCode=0 Jan 05 22:49:22 crc kubenswrapper[5034]: I0105 22:49:22.616504 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvfwh" event={"ID":"dc9d1390-4984-47b4-a2f6-84830126dcff","Type":"ContainerDied","Data":"08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275"} Jan 05 22:49:22 crc kubenswrapper[5034]: I0105 22:49:22.616860 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvfwh" event={"ID":"dc9d1390-4984-47b4-a2f6-84830126dcff","Type":"ContainerStarted","Data":"c4cf9409438f36773110d0bdfd86d5f922bc4bb4785105db3b0200c1d1a09bb9"} Jan 05 22:49:24 crc kubenswrapper[5034]: I0105 22:49:24.638735 5034 generic.go:334] "Generic (PLEG): container finished" podID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerID="33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20" exitCode=0 Jan 05 22:49:24 crc kubenswrapper[5034]: I0105 22:49:24.638862 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvfwh" event={"ID":"dc9d1390-4984-47b4-a2f6-84830126dcff","Type":"ContainerDied","Data":"33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20"} Jan 05 22:49:25 crc kubenswrapper[5034]: I0105 22:49:25.650370 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvfwh" event={"ID":"dc9d1390-4984-47b4-a2f6-84830126dcff","Type":"ContainerStarted","Data":"488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380"} Jan 05 22:49:25 crc kubenswrapper[5034]: I0105 22:49:25.673857 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvfwh" podStartSLOduration=2.212478356 podStartE2EDuration="4.673828352s" podCreationTimestamp="2026-01-05 22:49:21 +0000 UTC" firstStartedPulling="2026-01-05 22:49:22.618152559 +0000 UTC m=+3454.990151988" lastFinishedPulling="2026-01-05 22:49:25.079502535 +0000 UTC m=+3457.451501984" 
observedRunningTime="2026-01-05 22:49:25.67339619 +0000 UTC m=+3458.045395649" watchObservedRunningTime="2026-01-05 22:49:25.673828352 +0000 UTC m=+3458.045827791" Jan 05 22:49:32 crc kubenswrapper[5034]: I0105 22:49:32.049916 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:32 crc kubenswrapper[5034]: I0105 22:49:32.051171 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:32 crc kubenswrapper[5034]: I0105 22:49:32.088140 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:32 crc kubenswrapper[5034]: I0105 22:49:32.753341 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:32 crc kubenswrapper[5034]: I0105 22:49:32.803124 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvfwh"] Jan 05 22:49:34 crc kubenswrapper[5034]: I0105 22:49:34.733587 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rvfwh" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerName="registry-server" containerID="cri-o://488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380" gracePeriod=2 Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.731768 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvfwh" Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.744689 5034 generic.go:334] "Generic (PLEG): container finished" podID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerID="488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380" exitCode=0 Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.744735 5034 util.go:48] "No ready sandbox for pod can be found. 
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.744768 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvfwh" event={"ID":"dc9d1390-4984-47b4-a2f6-84830126dcff","Type":"ContainerDied","Data":"488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380"}
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.744841 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvfwh" event={"ID":"dc9d1390-4984-47b4-a2f6-84830126dcff","Type":"ContainerDied","Data":"c4cf9409438f36773110d0bdfd86d5f922bc4bb4785105db3b0200c1d1a09bb9"}
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.744864 5034 scope.go:117] "RemoveContainer" containerID="488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.781509 5034 scope.go:117] "RemoveContainer" containerID="33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.804804 5034 scope.go:117] "RemoveContainer" containerID="08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.831830 5034 scope.go:117] "RemoveContainer" containerID="488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380"
Jan 05 22:49:35 crc kubenswrapper[5034]: E0105 22:49:35.832403 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380\": container with ID starting with 488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380 not found: ID does not exist" containerID="488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.832450 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380"} err="failed to get container status \"488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380\": rpc error: code = NotFound desc = could not find container \"488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380\": container with ID starting with 488850d666a09247d3ae617b175509b9fb69c17e369789cb083b837a9b1e3380 not found: ID does not exist"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.832479 5034 scope.go:117] "RemoveContainer" containerID="33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20"
Jan 05 22:49:35 crc kubenswrapper[5034]: E0105 22:49:35.832917 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20\": container with ID starting with 33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20 not found: ID does not exist" containerID="33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.832936 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20"} err="failed to get container status \"33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20\": rpc error: code = NotFound desc = could not find container \"33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20\": container with ID starting with 33b93963d42063225669f0a106b0841b69d09a88f376354a2de4939aeac4ea20 not found: ID does not exist"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.832950 5034 scope.go:117] "RemoveContainer" containerID="08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275"
Jan 05 22:49:35 crc kubenswrapper[5034]: E0105 22:49:35.833190 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275\": container with ID starting with 08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275 not found: ID does not exist" containerID="08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.833209 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275"} err="failed to get container status \"08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275\": rpc error: code = NotFound desc = could not find container \"08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275\": container with ID starting with 08108bf953f6de68daf7b12058f4dc4072833ef90e730c6b4b53614ebb8ea275 not found: ID does not exist"
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.895535 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdfnj\" (UniqueName: \"kubernetes.io/projected/dc9d1390-4984-47b4-a2f6-84830126dcff-kube-api-access-jdfnj\") pod \"dc9d1390-4984-47b4-a2f6-84830126dcff\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") "
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.895669 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-catalog-content\") pod \"dc9d1390-4984-47b4-a2f6-84830126dcff\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") "
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.895743 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-utilities\") pod \"dc9d1390-4984-47b4-a2f6-84830126dcff\" (UID: \"dc9d1390-4984-47b4-a2f6-84830126dcff\") "
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.897453 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-utilities" (OuterVolumeSpecName: "utilities") pod "dc9d1390-4984-47b4-a2f6-84830126dcff" (UID: "dc9d1390-4984-47b4-a2f6-84830126dcff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.902209 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9d1390-4984-47b4-a2f6-84830126dcff-kube-api-access-jdfnj" (OuterVolumeSpecName: "kube-api-access-jdfnj") pod "dc9d1390-4984-47b4-a2f6-84830126dcff" (UID: "dc9d1390-4984-47b4-a2f6-84830126dcff"). InnerVolumeSpecName "kube-api-access-jdfnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.956871 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc9d1390-4984-47b4-a2f6-84830126dcff" (UID: "dc9d1390-4984-47b4-a2f6-84830126dcff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.998074 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-utilities\") on node \"crc\" DevicePath \"\""
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.998130 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdfnj\" (UniqueName: \"kubernetes.io/projected/dc9d1390-4984-47b4-a2f6-84830126dcff-kube-api-access-jdfnj\") on node \"crc\" DevicePath \"\""
Jan 05 22:49:35 crc kubenswrapper[5034]: I0105 22:49:35.998142 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d1390-4984-47b4-a2f6-84830126dcff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 05 22:49:36 crc kubenswrapper[5034]: I0105 22:49:36.101886 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvfwh"]
Jan 05 22:49:36 crc kubenswrapper[5034]: I0105 22:49:36.113298 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rvfwh"]
Jan 05 22:49:37 crc kubenswrapper[5034]: I0105 22:49:37.851539 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" path="/var/lib/kubelet/pods/dc9d1390-4984-47b4-a2f6-84830126dcff/volumes"
Jan 05 22:50:20 crc kubenswrapper[5034]: I0105 22:50:20.469385 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:50:20 crc kubenswrapper[5034]: I0105 22:50:20.470320 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:50:50 crc kubenswrapper[5034]: I0105 22:50:50.469054 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:50:50 crc kubenswrapper[5034]: I0105 22:50:50.469697 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.468752 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.469388 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.469451 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc"
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.470312 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.470382 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" gracePeriod=600
Jan 05 22:51:20 crc kubenswrapper[5034]: E0105 22:51:20.596961 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.722884 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" exitCode=0
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.722952 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253"}
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.723001 5034 scope.go:117] "RemoveContainer" containerID="dd85f882cd163104b4d356784fdce6e5f1141075a549f0132eb14e28e98a9c1b"
Jan 05 22:51:20 crc kubenswrapper[5034]: I0105 22:51:20.724245 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253"
Jan 05 22:51:20 crc kubenswrapper[5034]: E0105 22:51:20.724453 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:51:32 crc kubenswrapper[5034]: I0105 22:51:32.839563 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:51:32 crc kubenswrapper[5034]: E0105 22:51:32.840869 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:51:44 crc kubenswrapper[5034]: I0105 22:51:44.839384 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:51:44 crc kubenswrapper[5034]: E0105 22:51:44.840987 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:51:56 crc kubenswrapper[5034]: I0105 22:51:56.839521 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:51:56 crc kubenswrapper[5034]: E0105 22:51:56.841048 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:52:08 crc kubenswrapper[5034]: I0105 22:52:08.839050 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:52:08 crc kubenswrapper[5034]: E0105 22:52:08.839845 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:52:22 crc kubenswrapper[5034]: I0105 22:52:22.839368 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:52:22 crc kubenswrapper[5034]: E0105 22:52:22.840709 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:52:36 crc kubenswrapper[5034]: I0105 22:52:36.838927 5034 scope.go:117] "RemoveContainer" 
containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:52:36 crc kubenswrapper[5034]: E0105 22:52:36.839661 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:52:50 crc kubenswrapper[5034]: I0105 22:52:50.838879 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:52:50 crc kubenswrapper[5034]: E0105 22:52:50.840147 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:53:02 crc kubenswrapper[5034]: I0105 22:53:02.839822 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:53:02 crc kubenswrapper[5034]: E0105 22:53:02.841221 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:53:15 crc kubenswrapper[5034]: I0105 22:53:15.838362 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:53:15 crc kubenswrapper[5034]: E0105 22:53:15.839065 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:53:29 crc kubenswrapper[5034]: I0105 22:53:29.838891 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:53:29 crc kubenswrapper[5034]: E0105 22:53:29.839725 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:53:40 crc kubenswrapper[5034]: I0105 22:53:40.839162 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:53:40 crc kubenswrapper[5034]: E0105 22:53:40.839911 5034 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:53:54 crc kubenswrapper[5034]: I0105 22:53:54.838455 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:53:54 crc kubenswrapper[5034]: E0105 22:53:54.839629 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:54:09 crc kubenswrapper[5034]: I0105 22:54:09.838840 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:54:09 crc kubenswrapper[5034]: E0105 22:54:09.839965 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:54:21 crc kubenswrapper[5034]: I0105 22:54:21.839293 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:54:21 crc kubenswrapper[5034]: E0105 22:54:21.840134 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:54:36 crc kubenswrapper[5034]: I0105 22:54:36.839030 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:54:36 crc kubenswrapper[5034]: E0105 22:54:36.840343 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:54:47 crc kubenswrapper[5034]: I0105 22:54:47.838232 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:54:47 crc kubenswrapper[5034]: E0105 22:54:47.839217 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:54:59 crc kubenswrapper[5034]: I0105 22:54:59.838855 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:54:59 crc kubenswrapper[5034]: E0105 22:54:59.839725 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.041823 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vzhlp"] Jan 05 22:55:06 crc kubenswrapper[5034]: E0105 22:55:06.042740 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerName="extract-content" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.042754 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerName="extract-content" Jan 05 22:55:06 crc kubenswrapper[5034]: E0105 22:55:06.042766 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerName="registry-server" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.042772 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerName="registry-server" Jan 05 22:55:06 crc kubenswrapper[5034]: E0105 22:55:06.042791 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerName="extract-utilities" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.042798 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerName="extract-utilities" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.042929 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9d1390-4984-47b4-a2f6-84830126dcff" containerName="registry-server" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.044030 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.060756 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vzhlp"] Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.110485 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-catalog-content\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.110556 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-utilities\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.110596 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scssx\" (UniqueName: \"kubernetes.io/projected/8581708a-f650-49a9-800e-04f6c48fc8c1-kube-api-access-scssx\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.211841 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scssx\" (UniqueName: \"kubernetes.io/projected/8581708a-f650-49a9-800e-04f6c48fc8c1-kube-api-access-scssx\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.211977 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-catalog-content\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.212033 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-utilities\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.212558 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-catalog-content\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.212648 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-utilities\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.236293 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-scssx\" (UniqueName: \"kubernetes.io/projected/8581708a-f650-49a9-800e-04f6c48fc8c1-kube-api-access-scssx\") pod \"community-operators-vzhlp\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.370115 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:06 crc kubenswrapper[5034]: I0105 22:55:06.889210 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vzhlp"] Jan 05 22:55:07 crc kubenswrapper[5034]: I0105 22:55:07.672538 5034 generic.go:334] "Generic (PLEG): container finished" podID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerID="af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6" exitCode=0 Jan 05 22:55:07 crc kubenswrapper[5034]: I0105 22:55:07.672594 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzhlp" event={"ID":"8581708a-f650-49a9-800e-04f6c48fc8c1","Type":"ContainerDied","Data":"af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6"} Jan 05 22:55:07 crc kubenswrapper[5034]: I0105 22:55:07.672873 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzhlp" event={"ID":"8581708a-f650-49a9-800e-04f6c48fc8c1","Type":"ContainerStarted","Data":"8c9f7edce8b0fb280071e02c930cf67942d0e2e51950694f0cb86bd2d7ae39e0"} Jan 05 22:55:07 crc kubenswrapper[5034]: I0105 22:55:07.674745 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:55:08 crc kubenswrapper[5034]: I0105 22:55:08.682557 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzhlp" event={"ID":"8581708a-f650-49a9-800e-04f6c48fc8c1","Type":"ContainerStarted","Data":"3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63"} Jan 05 22:55:09 crc kubenswrapper[5034]: I0105 22:55:09.693267 5034 generic.go:334] "Generic (PLEG): container finished" podID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerID="3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63" exitCode=0 Jan 05 22:55:09 crc kubenswrapper[5034]: I0105 22:55:09.693350 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzhlp" event={"ID":"8581708a-f650-49a9-800e-04f6c48fc8c1","Type":"ContainerDied","Data":"3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63"} Jan 05 22:55:10 crc kubenswrapper[5034]: I0105 22:55:10.701741 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzhlp" event={"ID":"8581708a-f650-49a9-800e-04f6c48fc8c1","Type":"ContainerStarted","Data":"16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7"} Jan 05 22:55:10 crc kubenswrapper[5034]: I0105 22:55:10.723174 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vzhlp" podStartSLOduration=2.303696289 podStartE2EDuration="4.723150819s" podCreationTimestamp="2026-01-05 22:55:06 +0000 UTC" firstStartedPulling="2026-01-05 22:55:07.674440573 +0000 UTC m=+3800.046440012" lastFinishedPulling="2026-01-05 22:55:10.093895103 +0000 UTC m=+3802.465894542" observedRunningTime="2026-01-05 22:55:10.722987484 +0000 UTC m=+3803.094986923" watchObservedRunningTime="2026-01-05 
22:55:10.723150819 +0000 UTC m=+3803.095150268" Jan 05 22:55:11 crc kubenswrapper[5034]: I0105 22:55:11.838764 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:55:11 crc kubenswrapper[5034]: E0105 22:55:11.839326 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:55:16 crc kubenswrapper[5034]: I0105 22:55:16.370545 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:16 crc kubenswrapper[5034]: I0105 22:55:16.371181 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:16 crc kubenswrapper[5034]: I0105 22:55:16.429900 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:16 crc kubenswrapper[5034]: I0105 22:55:16.826028 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:16 crc kubenswrapper[5034]: I0105 22:55:16.878259 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vzhlp"] Jan 05 22:55:18 crc kubenswrapper[5034]: I0105 22:55:18.765807 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vzhlp" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerName="registry-server" containerID="cri-o://16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7" gracePeriod=2 Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.160802 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.302016 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-catalog-content\") pod \"8581708a-f650-49a9-800e-04f6c48fc8c1\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.302220 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-utilities\") pod \"8581708a-f650-49a9-800e-04f6c48fc8c1\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.302481 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scssx\" (UniqueName: \"kubernetes.io/projected/8581708a-f650-49a9-800e-04f6c48fc8c1-kube-api-access-scssx\") pod \"8581708a-f650-49a9-800e-04f6c48fc8c1\" (UID: \"8581708a-f650-49a9-800e-04f6c48fc8c1\") " Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.303292 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-utilities" (OuterVolumeSpecName: "utilities") pod "8581708a-f650-49a9-800e-04f6c48fc8c1" (UID: "8581708a-f650-49a9-800e-04f6c48fc8c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.316707 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8581708a-f650-49a9-800e-04f6c48fc8c1-kube-api-access-scssx" (OuterVolumeSpecName: "kube-api-access-scssx") pod "8581708a-f650-49a9-800e-04f6c48fc8c1" (UID: "8581708a-f650-49a9-800e-04f6c48fc8c1"). InnerVolumeSpecName "kube-api-access-scssx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.395027 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8581708a-f650-49a9-800e-04f6c48fc8c1" (UID: "8581708a-f650-49a9-800e-04f6c48fc8c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.403993 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scssx\" (UniqueName: \"kubernetes.io/projected/8581708a-f650-49a9-800e-04f6c48fc8c1-kube-api-access-scssx\") on node \"crc\" DevicePath \"\"" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.404016 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.404027 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8581708a-f650-49a9-800e-04f6c48fc8c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.774065 5034 generic.go:334] "Generic (PLEG): container finished" podID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerID="16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7" exitCode=0 Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.774271 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzhlp" event={"ID":"8581708a-f650-49a9-800e-04f6c48fc8c1","Type":"ContainerDied","Data":"16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7"} Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.774454 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzhlp" event={"ID":"8581708a-f650-49a9-800e-04f6c48fc8c1","Type":"ContainerDied","Data":"8c9f7edce8b0fb280071e02c930cf67942d0e2e51950694f0cb86bd2d7ae39e0"} Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.774479 5034 scope.go:117] "RemoveContainer" containerID="16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.774345 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzhlp" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.801499 5034 scope.go:117] "RemoveContainer" containerID="3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.806092 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vzhlp"] Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.818970 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vzhlp"] Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.841628 5034 scope.go:117] "RemoveContainer" containerID="af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.848419 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" path="/var/lib/kubelet/pods/8581708a-f650-49a9-800e-04f6c48fc8c1/volumes" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.859317 5034 scope.go:117] "RemoveContainer" containerID="16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7" Jan 05 22:55:19 crc kubenswrapper[5034]: E0105 22:55:19.859763 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7\": container with ID starting with 16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7 not found: ID does not exist" containerID="16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.859905 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7"} err="failed to get container status \"16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7\": rpc error: code = NotFound desc = could not find container \"16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7\": container with ID starting with 16fa331af3ca70d6631893dd563dd4a1f654753e5fddfd6f9d1449885e7b1fd7 not found: ID does not exist" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.859994 5034 scope.go:117] "RemoveContainer" containerID="3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63" Jan 05 22:55:19 crc kubenswrapper[5034]: E0105 22:55:19.860450 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63\": container with ID starting with 3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63 not found: ID does not exist" containerID="3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.860481 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63"} err="failed to get container status \"3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63\": rpc error: code = NotFound desc = could not find container \"3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63\": container with ID starting with 3d20d7746910cf69fbe6d02785e4080661c5df7709a122409341bf222e3e9d63 not found: ID does not exist" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 
22:55:19.860526 5034 scope.go:117] "RemoveContainer" containerID="af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6" Jan 05 22:55:19 crc kubenswrapper[5034]: E0105 22:55:19.861088 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6\": container with ID starting with af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6 not found: ID does not exist" containerID="af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6" Jan 05 22:55:19 crc kubenswrapper[5034]: I0105 22:55:19.861128 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6"} err="failed to get container status \"af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6\": rpc error: code = NotFound desc = could not find container \"af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6\": container with ID starting with af19663c1bb4009661d191c75fa3e8eada6649ec3cea64705c1e6fa466bdcfc6 not found: ID does not exist" Jan 05 22:55:24 crc kubenswrapper[5034]: I0105 22:55:24.838232 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:55:24 crc kubenswrapper[5034]: E0105 22:55:24.839097 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:55:35 crc kubenswrapper[5034]: I0105 22:55:35.872601 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:55:35 crc kubenswrapper[5034]: E0105 22:55:35.873646 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:55:48 crc kubenswrapper[5034]: I0105 22:55:48.840006 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:55:48 crc kubenswrapper[5034]: E0105 22:55:48.841339 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:56:01 crc kubenswrapper[5034]: I0105 22:56:01.838940 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:56:01 crc kubenswrapper[5034]: E0105 22:56:01.839678 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:56:16 crc kubenswrapper[5034]: I0105 22:56:16.839814 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:56:16 crc kubenswrapper[5034]: E0105 22:56:16.841605 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 22:56:30 crc kubenswrapper[5034]: I0105 22:56:30.839298 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:56:31 crc kubenswrapper[5034]: I0105 22:56:31.424825 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"ec39bd2178547c3ba3d206907fd576195b54616623919092b2fe0f1496523ec9"} Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.230703 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-87lxp"] Jan 05 22:58:38 crc kubenswrapper[5034]: E0105 22:58:38.231741 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerName="registry-server" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.231758 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerName="registry-server" Jan 05 22:58:38 crc kubenswrapper[5034]: E0105 22:58:38.231769 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerName="extract-utilities" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.231776 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerName="extract-utilities" Jan 05 22:58:38 crc kubenswrapper[5034]: E0105 22:58:38.231798 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerName="extract-content" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.231807 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerName="extract-content" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.232008 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8581708a-f650-49a9-800e-04f6c48fc8c1" containerName="registry-server" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.233400 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.251384 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87lxp"] Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.256826 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/49de94bf-eff6-4bcc-8c0f-dcff94b40081-kube-api-access-4ksr6\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.256973 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-catalog-content\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.257160 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-utilities\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.358799 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-catalog-content\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.358885 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-utilities\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.358980 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/49de94bf-eff6-4bcc-8c0f-dcff94b40081-kube-api-access-4ksr6\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.360552 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-catalog-content\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.361014 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-utilities\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.381933 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/49de94bf-eff6-4bcc-8c0f-dcff94b40081-kube-api-access-4ksr6\") pod \"redhat-operators-87lxp\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:38 crc kubenswrapper[5034]: I0105 22:58:38.557012 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:39 crc kubenswrapper[5034]: I0105 22:58:39.022892 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87lxp"] Jan 05 22:58:39 crc kubenswrapper[5034]: I0105 22:58:39.477192 5034 generic.go:334] "Generic (PLEG): container finished" podID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerID="bd744525ab45ce8c6c01bfbaa24eb00de0eeebd11a1d9e5a7d231529dc5e52cb" exitCode=0 Jan 05 22:58:39 crc kubenswrapper[5034]: I0105 22:58:39.477240 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87lxp" event={"ID":"49de94bf-eff6-4bcc-8c0f-dcff94b40081","Type":"ContainerDied","Data":"bd744525ab45ce8c6c01bfbaa24eb00de0eeebd11a1d9e5a7d231529dc5e52cb"} Jan 05 22:58:39 crc kubenswrapper[5034]: I0105 22:58:39.477292 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87lxp" event={"ID":"49de94bf-eff6-4bcc-8c0f-dcff94b40081","Type":"ContainerStarted","Data":"2abdad2c98d1c3ed69e3325f90c119a23baed99f9da44467499acac98e5a1a9f"} Jan 05 22:58:41 crc kubenswrapper[5034]: I0105 22:58:41.501658 5034 generic.go:334] "Generic (PLEG): container finished" podID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerID="7fb114c8f41d15f3188750d886429947c4064002e447be1e069e1c84f1db6739" exitCode=0 Jan 05 22:58:41 crc kubenswrapper[5034]: I0105 22:58:41.501906 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87lxp" event={"ID":"49de94bf-eff6-4bcc-8c0f-dcff94b40081","Type":"ContainerDied","Data":"7fb114c8f41d15f3188750d886429947c4064002e447be1e069e1c84f1db6739"} Jan 05 22:58:42 crc kubenswrapper[5034]: I0105 22:58:42.513867 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87lxp" event={"ID":"49de94bf-eff6-4bcc-8c0f-dcff94b40081","Type":"ContainerStarted","Data":"d3a8e2e7a9fd35fb3169a9c63357378db22bdbb306b6571a57f60ee61f8ba0c8"} Jan 05 22:58:42 crc kubenswrapper[5034]: I0105 22:58:42.544492 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-87lxp" podStartSLOduration=2.129717075 podStartE2EDuration="4.544465887s" podCreationTimestamp="2026-01-05 22:58:38 +0000 UTC" firstStartedPulling="2026-01-05 22:58:39.479013615 +0000 UTC m=+4011.851013054" lastFinishedPulling="2026-01-05 22:58:41.893762407 +0000 UTC m=+4014.265761866" observedRunningTime="2026-01-05 22:58:42.535219924 +0000 UTC m=+4014.907219373" watchObservedRunningTime="2026-01-05 22:58:42.544465887 +0000 UTC m=+4014.916465326" Jan 05 22:58:48 crc kubenswrapper[5034]: I0105 22:58:48.557861 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:48 crc kubenswrapper[5034]: I0105 22:58:48.558436 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:48 crc kubenswrapper[5034]: I0105 22:58:48.602594 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:49 crc kubenswrapper[5034]: I0105 22:58:49.610483 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:49 crc kubenswrapper[5034]: I0105 22:58:49.664567 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87lxp"] Jan 05 22:58:50 crc kubenswrapper[5034]: I0105 22:58:50.469170 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:58:50 crc kubenswrapper[5034]: I0105 22:58:50.469236 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:58:51 crc kubenswrapper[5034]: I0105 22:58:51.591540 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-87lxp" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerName="registry-server" containerID="cri-o://d3a8e2e7a9fd35fb3169a9c63357378db22bdbb306b6571a57f60ee61f8ba0c8" gracePeriod=2 Jan 05 22:58:53 crc kubenswrapper[5034]: I0105 22:58:53.608249 5034 generic.go:334] "Generic (PLEG): container finished" podID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerID="d3a8e2e7a9fd35fb3169a9c63357378db22bdbb306b6571a57f60ee61f8ba0c8" exitCode=0 Jan 05 22:58:53 crc kubenswrapper[5034]: I0105 22:58:53.608352 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87lxp" event={"ID":"49de94bf-eff6-4bcc-8c0f-dcff94b40081","Type":"ContainerDied","Data":"d3a8e2e7a9fd35fb3169a9c63357378db22bdbb306b6571a57f60ee61f8ba0c8"} Jan 05 22:58:53 crc kubenswrapper[5034]: I0105 22:58:53.881817 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:53 crc kubenswrapper[5034]: I0105 22:58:53.900970 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-catalog-content\") pod \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " Jan 05 22:58:53 crc kubenswrapper[5034]: I0105 22:58:53.901072 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/49de94bf-eff6-4bcc-8c0f-dcff94b40081-kube-api-access-4ksr6\") pod \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " Jan 05 22:58:53 crc kubenswrapper[5034]: I0105 22:58:53.901172 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-utilities\") pod \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\" (UID: \"49de94bf-eff6-4bcc-8c0f-dcff94b40081\") " Jan 05 22:58:53 crc kubenswrapper[5034]: I0105 22:58:53.904275 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-utilities" (OuterVolumeSpecName: "utilities") pod "49de94bf-eff6-4bcc-8c0f-dcff94b40081" (UID: "49de94bf-eff6-4bcc-8c0f-dcff94b40081"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:58:53 crc kubenswrapper[5034]: I0105 22:58:53.907385 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49de94bf-eff6-4bcc-8c0f-dcff94b40081-kube-api-access-4ksr6" (OuterVolumeSpecName: "kube-api-access-4ksr6") pod "49de94bf-eff6-4bcc-8c0f-dcff94b40081" (UID: "49de94bf-eff6-4bcc-8c0f-dcff94b40081"). InnerVolumeSpecName "kube-api-access-4ksr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.002835 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/49de94bf-eff6-4bcc-8c0f-dcff94b40081-kube-api-access-4ksr6\") on node \"crc\" DevicePath \"\"" Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.002868 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.036276 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49de94bf-eff6-4bcc-8c0f-dcff94b40081" (UID: "49de94bf-eff6-4bcc-8c0f-dcff94b40081"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.103707 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49de94bf-eff6-4bcc-8c0f-dcff94b40081-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.618201 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87lxp" event={"ID":"49de94bf-eff6-4bcc-8c0f-dcff94b40081","Type":"ContainerDied","Data":"2abdad2c98d1c3ed69e3325f90c119a23baed99f9da44467499acac98e5a1a9f"} Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.618288 5034 scope.go:117] "RemoveContainer" containerID="d3a8e2e7a9fd35fb3169a9c63357378db22bdbb306b6571a57f60ee61f8ba0c8" Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.618337 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87lxp" Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.638803 5034 scope.go:117] "RemoveContainer" containerID="7fb114c8f41d15f3188750d886429947c4064002e447be1e069e1c84f1db6739" Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.659616 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87lxp"] Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.667054 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-87lxp"] Jan 05 22:58:54 crc kubenswrapper[5034]: I0105 22:58:54.680413 5034 scope.go:117] "RemoveContainer" containerID="bd744525ab45ce8c6c01bfbaa24eb00de0eeebd11a1d9e5a7d231529dc5e52cb" Jan 05 22:58:55 crc kubenswrapper[5034]: I0105 22:58:55.846903 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" path="/var/lib/kubelet/pods/49de94bf-eff6-4bcc-8c0f-dcff94b40081/volumes" Jan 05 22:59:20 crc kubenswrapper[5034]: I0105 22:59:20.469223 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:59:20 crc kubenswrapper[5034]: I0105 22:59:20.470164 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:59:48 crc kubenswrapper[5034]: I0105 22:59:48.925059 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5kfdq"] Jan 05 22:59:48 crc kubenswrapper[5034]: E0105 22:59:48.926390 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerName="registry-server" Jan 05 22:59:48 crc kubenswrapper[5034]: I0105 22:59:48.926420 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerName="registry-server" Jan 05 22:59:48 crc kubenswrapper[5034]: E0105 22:59:48.926451 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerName="extract-utilities" Jan 05 22:59:48 crc kubenswrapper[5034]: I0105 22:59:48.926463 
5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerName="extract-utilities" Jan 05 22:59:48 crc kubenswrapper[5034]: E0105 22:59:48.926483 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerName="extract-content" Jan 05 22:59:48 crc kubenswrapper[5034]: I0105 22:59:48.926497 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerName="extract-content" Jan 05 22:59:48 crc kubenswrapper[5034]: I0105 22:59:48.926819 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="49de94bf-eff6-4bcc-8c0f-dcff94b40081" containerName="registry-server" Jan 05 22:59:48 crc kubenswrapper[5034]: I0105 22:59:48.928530 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:48 crc kubenswrapper[5034]: I0105 22:59:48.940238 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kfdq"] Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.082711 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mkf2\" (UniqueName: \"kubernetes.io/projected/36fe6b4c-a285-43af-aa6a-f957476e2802-kube-api-access-9mkf2\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.082861 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-catalog-content\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.082902 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-utilities\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.184965 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-catalog-content\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.185036 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-utilities\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.185073 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mkf2\" (UniqueName: \"kubernetes.io/projected/36fe6b4c-a285-43af-aa6a-f957476e2802-kube-api-access-9mkf2\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 
crc kubenswrapper[5034]: I0105 22:59:49.185583 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-catalog-content\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.185613 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-utilities\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.209184 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mkf2\" (UniqueName: \"kubernetes.io/projected/36fe6b4c-a285-43af-aa6a-f957476e2802-kube-api-access-9mkf2\") pod \"certified-operators-5kfdq\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.262548 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:49 crc kubenswrapper[5034]: I0105 22:59:49.731733 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kfdq"] Jan 05 22:59:50 crc kubenswrapper[5034]: I0105 22:59:50.034280 5034 generic.go:334] "Generic (PLEG): container finished" podID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerID="768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732" exitCode=0 Jan 05 22:59:50 crc kubenswrapper[5034]: I0105 22:59:50.034379 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kfdq" event={"ID":"36fe6b4c-a285-43af-aa6a-f957476e2802","Type":"ContainerDied","Data":"768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732"} Jan 05 22:59:50 crc kubenswrapper[5034]: I0105 22:59:50.034654 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kfdq" event={"ID":"36fe6b4c-a285-43af-aa6a-f957476e2802","Type":"ContainerStarted","Data":"c4e838cfe96ddd373d3add80ba10c0ed3c47adc3bba905ae13969f49c3fc652e"} Jan 05 22:59:50 crc kubenswrapper[5034]: I0105 22:59:50.470014 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:59:50 crc kubenswrapper[5034]: I0105 22:59:50.470433 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:59:50 crc kubenswrapper[5034]: I0105 22:59:50.470513 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 22:59:50 crc kubenswrapper[5034]: I0105 22:59:50.472462 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ec39bd2178547c3ba3d206907fd576195b54616623919092b2fe0f1496523ec9"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:59:50 crc kubenswrapper[5034]: I0105 22:59:50.472573 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://ec39bd2178547c3ba3d206907fd576195b54616623919092b2fe0f1496523ec9" gracePeriod=600 Jan 05 22:59:51 crc kubenswrapper[5034]: I0105 22:59:51.044816 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="ec39bd2178547c3ba3d206907fd576195b54616623919092b2fe0f1496523ec9" exitCode=0 Jan 05 22:59:51 crc kubenswrapper[5034]: I0105 22:59:51.044873 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"ec39bd2178547c3ba3d206907fd576195b54616623919092b2fe0f1496523ec9"} Jan 05 22:59:51 crc kubenswrapper[5034]: I0105 22:59:51.044918 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c"} Jan 05 22:59:51 crc kubenswrapper[5034]: I0105 22:59:51.044945 5034 scope.go:117] "RemoveContainer" containerID="ada107eeae87853a23ff7e04d6935f31c3febd00b8af8d13cf243d292e0ed253" Jan 05 22:59:52 crc kubenswrapper[5034]: I0105 22:59:52.060177 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kfdq" event={"ID":"36fe6b4c-a285-43af-aa6a-f957476e2802","Type":"ContainerStarted","Data":"6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600"} Jan 05 22:59:53 crc kubenswrapper[5034]: I0105 22:59:53.073597 5034 generic.go:334] "Generic (PLEG): container finished" podID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerID="6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600" exitCode=0 Jan 05 22:59:53 crc kubenswrapper[5034]: I0105 22:59:53.073756 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kfdq" event={"ID":"36fe6b4c-a285-43af-aa6a-f957476e2802","Type":"ContainerDied","Data":"6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600"} Jan 05 22:59:54 crc kubenswrapper[5034]: I0105 22:59:54.084744 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kfdq" event={"ID":"36fe6b4c-a285-43af-aa6a-f957476e2802","Type":"ContainerStarted","Data":"dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338"} Jan 05 22:59:54 crc kubenswrapper[5034]: I0105 22:59:54.105880 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5kfdq" podStartSLOduration=2.455876121 podStartE2EDuration="6.105858049s" podCreationTimestamp="2026-01-05 22:59:48 +0000 UTC" firstStartedPulling="2026-01-05 22:59:50.03649592 +0000 UTC m=+4082.408495359" lastFinishedPulling="2026-01-05 22:59:53.686477848 +0000 UTC m=+4086.058477287" observedRunningTime="2026-01-05 22:59:54.104624334 +0000 UTC m=+4086.476623773" 
watchObservedRunningTime="2026-01-05 22:59:54.105858049 +0000 UTC m=+4086.477857488" Jan 05 22:59:59 crc kubenswrapper[5034]: I0105 22:59:59.263679 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:59 crc kubenswrapper[5034]: I0105 22:59:59.264958 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 22:59:59 crc kubenswrapper[5034]: I0105 22:59:59.304592 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.179918 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.181915 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf"] Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.183511 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.189405 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf"] Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.198287 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.198288 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.243565 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kfdq"] Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.365557 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37fb373d-1a70-40f5-b36a-dc74973a135f-config-volume\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.365606 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37fb373d-1a70-40f5-b36a-dc74973a135f-secret-volume\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.365679 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szftz\" (UniqueName: \"kubernetes.io/projected/37fb373d-1a70-40f5-b36a-dc74973a135f-kube-api-access-szftz\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.466917 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/37fb373d-1a70-40f5-b36a-dc74973a135f-config-volume\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.466965 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37fb373d-1a70-40f5-b36a-dc74973a135f-secret-volume\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.467036 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szftz\" (UniqueName: \"kubernetes.io/projected/37fb373d-1a70-40f5-b36a-dc74973a135f-kube-api-access-szftz\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.467857 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37fb373d-1a70-40f5-b36a-dc74973a135f-config-volume\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.474109 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37fb373d-1a70-40f5-b36a-dc74973a135f-secret-volume\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.484918 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szftz\" (UniqueName: \"kubernetes.io/projected/37fb373d-1a70-40f5-b36a-dc74973a135f-kube-api-access-szftz\") pod \"collect-profiles-29460900-xsmhf\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.518673 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:00 crc kubenswrapper[5034]: I0105 23:00:00.740361 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf"] Jan 05 23:00:01 crc kubenswrapper[5034]: I0105 23:00:01.141124 5034 generic.go:334] "Generic (PLEG): container finished" podID="37fb373d-1a70-40f5-b36a-dc74973a135f" containerID="e0ef74aaa394c57dd2f92409463f857358259df0ef6ae5f9f19771a84c43b921" exitCode=0 Jan 05 23:00:01 crc kubenswrapper[5034]: I0105 23:00:01.141241 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" event={"ID":"37fb373d-1a70-40f5-b36a-dc74973a135f","Type":"ContainerDied","Data":"e0ef74aaa394c57dd2f92409463f857358259df0ef6ae5f9f19771a84c43b921"} Jan 05 23:00:01 crc kubenswrapper[5034]: I0105 23:00:01.141582 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" event={"ID":"37fb373d-1a70-40f5-b36a-dc74973a135f","Type":"ContainerStarted","Data":"9bc6a9de94874bd91bf9552d7d0982da5d850b06dea60f4857292f6b57421c97"} Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.147600 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5kfdq" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerName="registry-server" containerID="cri-o://dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338" gracePeriod=2 Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.471528 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.503930 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37fb373d-1a70-40f5-b36a-dc74973a135f-secret-volume\") pod \"37fb373d-1a70-40f5-b36a-dc74973a135f\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.504225 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37fb373d-1a70-40f5-b36a-dc74973a135f-config-volume\") pod \"37fb373d-1a70-40f5-b36a-dc74973a135f\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.505031 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fb373d-1a70-40f5-b36a-dc74973a135f-config-volume" (OuterVolumeSpecName: "config-volume") pod "37fb373d-1a70-40f5-b36a-dc74973a135f" (UID: "37fb373d-1a70-40f5-b36a-dc74973a135f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.510397 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37fb373d-1a70-40f5-b36a-dc74973a135f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37fb373d-1a70-40f5-b36a-dc74973a135f" (UID: "37fb373d-1a70-40f5-b36a-dc74973a135f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.605591 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szftz\" (UniqueName: \"kubernetes.io/projected/37fb373d-1a70-40f5-b36a-dc74973a135f-kube-api-access-szftz\") pod \"37fb373d-1a70-40f5-b36a-dc74973a135f\" (UID: \"37fb373d-1a70-40f5-b36a-dc74973a135f\") " Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.605884 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37fb373d-1a70-40f5-b36a-dc74973a135f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.605902 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37fb373d-1a70-40f5-b36a-dc74973a135f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.609209 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fb373d-1a70-40f5-b36a-dc74973a135f-kube-api-access-szftz" (OuterVolumeSpecName: "kube-api-access-szftz") pod "37fb373d-1a70-40f5-b36a-dc74973a135f" (UID: "37fb373d-1a70-40f5-b36a-dc74973a135f"). InnerVolumeSpecName "kube-api-access-szftz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:00:02 crc kubenswrapper[5034]: I0105 23:00:02.706962 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szftz\" (UniqueName: \"kubernetes.io/projected/37fb373d-1a70-40f5-b36a-dc74973a135f-kube-api-access-szftz\") on node \"crc\" DevicePath \"\"" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.118399 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.157983 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" event={"ID":"37fb373d-1a70-40f5-b36a-dc74973a135f","Type":"ContainerDied","Data":"9bc6a9de94874bd91bf9552d7d0982da5d850b06dea60f4857292f6b57421c97"} Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.158109 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bc6a9de94874bd91bf9552d7d0982da5d850b06dea60f4857292f6b57421c97" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.158198 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.161250 5034 generic.go:334] "Generic (PLEG): container finished" podID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerID="dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338" exitCode=0 Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.161335 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5kfdq" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.161339 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kfdq" event={"ID":"36fe6b4c-a285-43af-aa6a-f957476e2802","Type":"ContainerDied","Data":"dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338"} Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.161658 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kfdq" event={"ID":"36fe6b4c-a285-43af-aa6a-f957476e2802","Type":"ContainerDied","Data":"c4e838cfe96ddd373d3add80ba10c0ed3c47adc3bba905ae13969f49c3fc652e"} Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.161695 5034 scope.go:117] "RemoveContainer" containerID="dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.179191 5034 scope.go:117] "RemoveContainer" containerID="6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.196981 5034 scope.go:117] "RemoveContainer" containerID="768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.215033 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-catalog-content\") pod \"36fe6b4c-a285-43af-aa6a-f957476e2802\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.215142 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-utilities\") pod \"36fe6b4c-a285-43af-aa6a-f957476e2802\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.215306 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mkf2\" (UniqueName: \"kubernetes.io/projected/36fe6b4c-a285-43af-aa6a-f957476e2802-kube-api-access-9mkf2\") pod \"36fe6b4c-a285-43af-aa6a-f957476e2802\" (UID: \"36fe6b4c-a285-43af-aa6a-f957476e2802\") " Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.216212 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-utilities" (OuterVolumeSpecName: "utilities") pod "36fe6b4c-a285-43af-aa6a-f957476e2802" (UID: "36fe6b4c-a285-43af-aa6a-f957476e2802"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.219397 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fe6b4c-a285-43af-aa6a-f957476e2802-kube-api-access-9mkf2" (OuterVolumeSpecName: "kube-api-access-9mkf2") pod "36fe6b4c-a285-43af-aa6a-f957476e2802" (UID: "36fe6b4c-a285-43af-aa6a-f957476e2802"). InnerVolumeSpecName "kube-api-access-9mkf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.222833 5034 scope.go:117] "RemoveContainer" containerID="dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338" Jan 05 23:00:03 crc kubenswrapper[5034]: E0105 23:00:03.223379 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338\": container with ID starting with dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338 not found: ID does not exist" containerID="dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.223429 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338"} err="failed to get container status \"dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338\": rpc error: code = NotFound desc = could not find container \"dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338\": container with ID starting with dad97ffdaad1e8896878f55381d8af86242a3d29b67a2339f101c5c15941b338 not found: ID does not exist" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.223462 5034 scope.go:117] "RemoveContainer" containerID="6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600" Jan 05 23:00:03 crc kubenswrapper[5034]: E0105 23:00:03.224021 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600\": container with ID starting with 6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600 not found: ID does not exist" containerID="6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.224102 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600"} err="failed to get container status \"6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600\": rpc error: code = NotFound desc = could not find container \"6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600\": container with ID starting with 6382ad3cbf539b816504f54b32b1da61db5e35cc602219979631d2f7c2f69600 not found: ID does not exist" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.224127 5034 scope.go:117] "RemoveContainer" containerID="768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732" Jan 05 23:00:03 crc kubenswrapper[5034]: E0105 23:00:03.224457 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732\": container with ID starting with 768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732 not found: ID does not exist" containerID="768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.224502 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732"} err="failed to get container status \"768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732\": rpc error: code = NotFound desc = could not 
find container \"768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732\": container with ID starting with 768fc6fbb6eabe29e2460ce497ce398889b6a491cb88664e6c9ebcbeaae6b732 not found: ID does not exist" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.269246 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36fe6b4c-a285-43af-aa6a-f957476e2802" (UID: "36fe6b4c-a285-43af-aa6a-f957476e2802"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.317683 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mkf2\" (UniqueName: \"kubernetes.io/projected/36fe6b4c-a285-43af-aa6a-f957476e2802-kube-api-access-9mkf2\") on node \"crc\" DevicePath \"\"" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.317726 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.317753 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fe6b4c-a285-43af-aa6a-f957476e2802-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.496611 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kfdq"] Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.502143 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5kfdq"] Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.551668 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8"] Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.556981 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460855-hxqr8"] Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.849899 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" path="/var/lib/kubelet/pods/36fe6b4c-a285-43af-aa6a-f957476e2802/volumes" Jan 05 23:00:03 crc kubenswrapper[5034]: I0105 23:00:03.850774 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbe1e07-4332-455a-b66c-57b800a25825" path="/var/lib/kubelet/pods/afbe1e07-4332-455a-b66c-57b800a25825/volumes" Jan 05 23:00:57 crc kubenswrapper[5034]: I0105 23:00:57.891644 5034 scope.go:117] "RemoveContainer" containerID="e8b18c34342ca809c0d77ef65c97ca1677af7ff8ffd24416c1c82fcf7970aa0b" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.051180 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-648sx"] Jan 05 23:01:23 crc kubenswrapper[5034]: E0105 23:01:23.053817 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerName="registry-server" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.054139 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerName="registry-server" Jan 05 23:01:23 crc kubenswrapper[5034]: E0105 23:01:23.054274 5034 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerName="extract-utilities" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.055624 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerName="extract-utilities" Jan 05 23:01:23 crc kubenswrapper[5034]: E0105 23:01:23.055743 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fb373d-1a70-40f5-b36a-dc74973a135f" containerName="collect-profiles" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.055828 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fb373d-1a70-40f5-b36a-dc74973a135f" containerName="collect-profiles" Jan 05 23:01:23 crc kubenswrapper[5034]: E0105 23:01:23.055920 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerName="extract-content" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.056012 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerName="extract-content" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.056364 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="36fe6b4c-a285-43af-aa6a-f957476e2802" containerName="registry-server" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.056476 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fb373d-1a70-40f5-b36a-dc74973a135f" containerName="collect-profiles" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.058874 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.063414 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-648sx"] Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.183275 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-utilities\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.183360 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-catalog-content\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.183392 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2rb\" (UniqueName: \"kubernetes.io/projected/1301dfd4-ca89-492a-8465-778b56a9a120-kube-api-access-6n2rb\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.284681 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-catalog-content\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 
23:01:23.284767 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2rb\" (UniqueName: \"kubernetes.io/projected/1301dfd4-ca89-492a-8465-778b56a9a120-kube-api-access-6n2rb\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.284859 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-utilities\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.285197 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-catalog-content\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.285234 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-utilities\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.303440 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2rb\" (UniqueName: \"kubernetes.io/projected/1301dfd4-ca89-492a-8465-778b56a9a120-kube-api-access-6n2rb\") pod \"redhat-marketplace-648sx\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.382962 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:23 crc kubenswrapper[5034]: I0105 23:01:23.828731 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-648sx"] Jan 05 23:01:24 crc kubenswrapper[5034]: I0105 23:01:24.857598 5034 generic.go:334] "Generic (PLEG): container finished" podID="1301dfd4-ca89-492a-8465-778b56a9a120" containerID="d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3" exitCode=0 Jan 05 23:01:24 crc kubenswrapper[5034]: I0105 23:01:24.857664 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-648sx" event={"ID":"1301dfd4-ca89-492a-8465-778b56a9a120","Type":"ContainerDied","Data":"d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3"} Jan 05 23:01:24 crc kubenswrapper[5034]: I0105 23:01:24.857953 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-648sx" event={"ID":"1301dfd4-ca89-492a-8465-778b56a9a120","Type":"ContainerStarted","Data":"2b249b0579624937fe61863351bc40ba27bcf9f8a7ad005710982c7ac147f9fa"} Jan 05 23:01:24 crc kubenswrapper[5034]: I0105 23:01:24.859535 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 23:01:25 crc kubenswrapper[5034]: I0105 23:01:25.869484 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-648sx" event={"ID":"1301dfd4-ca89-492a-8465-778b56a9a120","Type":"ContainerStarted","Data":"de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1"} Jan 05 23:01:26 crc kubenswrapper[5034]: I0105 23:01:26.878548 5034 generic.go:334] "Generic (PLEG): container finished" podID="1301dfd4-ca89-492a-8465-778b56a9a120" containerID="de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1" exitCode=0 Jan 05 23:01:26 crc kubenswrapper[5034]: I0105 23:01:26.878597 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-648sx" event={"ID":"1301dfd4-ca89-492a-8465-778b56a9a120","Type":"ContainerDied","Data":"de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1"} Jan 05 23:01:27 crc kubenswrapper[5034]: I0105 23:01:27.889441 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-648sx" event={"ID":"1301dfd4-ca89-492a-8465-778b56a9a120","Type":"ContainerStarted","Data":"75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc"} Jan 05 23:01:27 crc kubenswrapper[5034]: I0105 23:01:27.914882 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-648sx" podStartSLOduration=2.406361319 podStartE2EDuration="4.91486079s" podCreationTimestamp="2026-01-05 23:01:23 +0000 UTC" firstStartedPulling="2026-01-05 23:01:24.859328104 +0000 UTC m=+4177.231327533" lastFinishedPulling="2026-01-05 23:01:27.367827555 +0000 UTC m=+4179.739827004" observedRunningTime="2026-01-05 23:01:27.914448489 +0000 UTC m=+4180.286447928" watchObservedRunningTime="2026-01-05 23:01:27.91486079 +0000 UTC m=+4180.286860229" Jan 05 23:01:33 crc kubenswrapper[5034]: I0105 23:01:33.384005 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:33 crc kubenswrapper[5034]: I0105 23:01:33.385261 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-648sx"
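
The pod_startup_latency_tracker entry above can be checked by hand: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same interval excluding the image-pull window (lastFinishedPulling minus firstStartedPulling, read most precisely from the monotonic m=+ readings). A quick sketch of that arithmetic using only the figures printed above:

    // slo.go — recompute the redhat-marketplace-648sx startup figures above.
    package main

    import "fmt"

    func main() {
    	const (
    		e2e       = 4.91486079     // podStartE2EDuration, seconds
    		pullStart = 4177.231327533 // firstStartedPulling, monotonic m=+ seconds
    		pullEnd   = 4179.739827004 // lastFinishedPulling, monotonic m=+ seconds
    	)
    	pull := pullEnd - pullStart // ≈ 2.508499471 s spent pulling the image
    	// e2e minus pull time ≈ 2.406361319, the podStartSLOduration printed above.
    	fmt.Printf("pull=%.9f slo=%.9f\n", pull, e2e-pull)
    }

So of the ~4.9 s end-to-end startup, roughly 2.5 s was image pull and only ~2.4 s counts against the startup SLI.
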
Jan 05 23:01:33 crc kubenswrapper[5034]: I0105 23:01:33.428645 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:33 crc kubenswrapper[5034]: I0105 23:01:33.983157 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:34 crc kubenswrapper[5034]: I0105 23:01:34.040035 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-648sx"] Jan 05 23:01:35 crc kubenswrapper[5034]: I0105 23:01:35.953661 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-648sx" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" containerName="registry-server" containerID="cri-o://75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc" gracePeriod=2 Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.342412 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.518703 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n2rb\" (UniqueName: \"kubernetes.io/projected/1301dfd4-ca89-492a-8465-778b56a9a120-kube-api-access-6n2rb\") pod \"1301dfd4-ca89-492a-8465-778b56a9a120\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.518803 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-utilities\") pod \"1301dfd4-ca89-492a-8465-778b56a9a120\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.518835 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-catalog-content\") pod \"1301dfd4-ca89-492a-8465-778b56a9a120\" (UID: \"1301dfd4-ca89-492a-8465-778b56a9a120\") " Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.519681 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-utilities" (OuterVolumeSpecName: "utilities") pod "1301dfd4-ca89-492a-8465-778b56a9a120" (UID: "1301dfd4-ca89-492a-8465-778b56a9a120"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.524139 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1301dfd4-ca89-492a-8465-778b56a9a120-kube-api-access-6n2rb" (OuterVolumeSpecName: "kube-api-access-6n2rb") pod "1301dfd4-ca89-492a-8465-778b56a9a120" (UID: "1301dfd4-ca89-492a-8465-778b56a9a120"). InnerVolumeSpecName "kube-api-access-6n2rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.552027 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1301dfd4-ca89-492a-8465-778b56a9a120" (UID: "1301dfd4-ca89-492a-8465-778b56a9a120"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.621040 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n2rb\" (UniqueName: \"kubernetes.io/projected/1301dfd4-ca89-492a-8465-778b56a9a120-kube-api-access-6n2rb\") on node \"crc\" DevicePath \"\"" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.621110 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.621123 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1301dfd4-ca89-492a-8465-778b56a9a120-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.966182 5034 generic.go:334] "Generic (PLEG): container finished" podID="1301dfd4-ca89-492a-8465-778b56a9a120" containerID="75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc" exitCode=0 Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.966248 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-648sx" event={"ID":"1301dfd4-ca89-492a-8465-778b56a9a120","Type":"ContainerDied","Data":"75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc"} Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.966296 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-648sx" event={"ID":"1301dfd4-ca89-492a-8465-778b56a9a120","Type":"ContainerDied","Data":"2b249b0579624937fe61863351bc40ba27bcf9f8a7ad005710982c7ac147f9fa"} Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.966325 5034 scope.go:117] "RemoveContainer" containerID="75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.966389 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-648sx" Jan 05 23:01:36 crc kubenswrapper[5034]: I0105 23:01:36.989682 5034 scope.go:117] "RemoveContainer" containerID="de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1" Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.015468 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-648sx"] Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.021444 5034 scope.go:117] "RemoveContainer" containerID="d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3" Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.021620 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-648sx"] Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.041311 5034 scope.go:117] "RemoveContainer" containerID="75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc" Jan 05 23:01:37 crc kubenswrapper[5034]: E0105 23:01:37.041741 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc\": container with ID starting with 75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc not found: ID does not exist" containerID="75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc" Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.041776 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc"} err="failed to get container status \"75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc\": rpc error: code = NotFound desc = could not find container \"75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc\": container with ID starting with 75ba4b8f075e9e5de87525fa5b26602e712cb65d45938244e867959412fc36dc not found: ID does not exist" Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.041821 5034 scope.go:117] "RemoveContainer" containerID="de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1" Jan 05 23:01:37 crc kubenswrapper[5034]: E0105 23:01:37.042234 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1\": container with ID starting with de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1 not found: ID does not exist" containerID="de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1" Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.042295 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1"} err="failed to get container status \"de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1\": rpc error: code = NotFound desc = could not find container \"de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1\": container with ID starting with de1d39c68f18ea371e1a8798042f141ad35febab6078746216b05e49323073e1 not found: ID does not exist" Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.042345 5034 scope.go:117] "RemoveContainer" containerID="d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3" Jan 05 23:01:37 crc kubenswrapper[5034]: E0105 23:01:37.042645 5034 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3\": container with ID starting with d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3 not found: ID does not exist" containerID="d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3" Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.042677 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3"} err="failed to get container status \"d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3\": rpc error: code = NotFound desc = could not find container \"d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3\": container with ID starting with d64295288c63aefd8aacdc4eeed09fa9ee89c5f7e867f459e36c7b27bf0603a3 not found: ID does not exist" Jan 05 23:01:37 crc kubenswrapper[5034]: I0105 23:01:37.847744 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" path="/var/lib/kubelet/pods/1301dfd4-ca89-492a-8465-778b56a9a120/volumes" Jan 05 23:01:50 crc kubenswrapper[5034]: I0105 23:01:50.469862 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:01:50 crc kubenswrapper[5034]: I0105 23:01:50.470835 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:02:20 crc kubenswrapper[5034]: I0105 23:02:20.469042 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:02:20 crc kubenswrapper[5034]: I0105 23:02:20.469592 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:02:50 crc kubenswrapper[5034]: I0105 23:02:50.469164 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:02:50 crc kubenswrapper[5034]: I0105 23:02:50.469862 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:02:50 crc kubenswrapper[5034]: I0105 23:02:50.469920 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 23:02:50 crc kubenswrapper[5034]: I0105 23:02:50.470711 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 23:02:50 crc kubenswrapper[5034]: I0105 23:02:50.470776 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" gracePeriod=600 Jan 05 23:02:50 crc kubenswrapper[5034]: E0105 23:02:50.602908 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:02:51 crc kubenswrapper[5034]: I0105 23:02:51.586593 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" exitCode=0 Jan 05 23:02:51 crc kubenswrapper[5034]: I0105 23:02:51.586625 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c"} Jan 05 23:02:51 crc kubenswrapper[5034]: I0105 23:02:51.586952 5034 scope.go:117] "RemoveContainer" containerID="ec39bd2178547c3ba3d206907fd576195b54616623919092b2fe0f1496523ec9" Jan 05 23:02:51 crc kubenswrapper[5034]: I0105 23:02:51.587550 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:02:51 crc kubenswrapper[5034]: E0105 23:02:51.587783 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:03:01 crc kubenswrapper[5034]: I0105 23:03:01.838865 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:03:01 crc kubenswrapper[5034]: E0105 23:03:01.839718 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:03:14 crc 
Jan 05 23:03:14 crc kubenswrapper[5034]: I0105 23:03:14.838517 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:03:14 crc kubenswrapper[5034]: E0105 23:03:14.839508 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:03:26 crc kubenswrapper[5034]: I0105 23:03:26.838560 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:03:26 crc kubenswrapper[5034]: E0105 23:03:26.839297 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:03:41 crc kubenswrapper[5034]: I0105 23:03:41.839407 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:03:41 crc kubenswrapper[5034]: E0105 23:03:41.840192 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:03:56 crc kubenswrapper[5034]: I0105 23:03:56.839423 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:03:56 crc kubenswrapper[5034]: E0105 23:03:56.840515 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:04:10 crc kubenswrapper[5034]: I0105 23:04:10.839018 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:04:10 crc kubenswrapper[5034]: E0105 23:04:10.839739 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:04:25 crc kubenswrapper[5034]: I0105 23:04:25.839623 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:04:25 crc 
kubenswrapper[5034]: E0105 23:04:25.840919 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:04:39 crc kubenswrapper[5034]: I0105 23:04:39.838476 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:04:39 crc kubenswrapper[5034]: E0105 23:04:39.840416 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:04:54 crc kubenswrapper[5034]: I0105 23:04:54.839689 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:04:54 crc kubenswrapper[5034]: E0105 23:04:54.841599 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:05:05 crc kubenswrapper[5034]: I0105 23:05:05.838574 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:05:05 crc kubenswrapper[5034]: E0105 23:05:05.839789 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:05:17 crc kubenswrapper[5034]: I0105 23:05:17.849879 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:05:17 crc kubenswrapper[5034]: E0105 23:05:17.850975 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:05:31 crc kubenswrapper[5034]: I0105 23:05:31.838069 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:05:31 crc kubenswrapper[5034]: E0105 23:05:31.839070 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:05:45 crc kubenswrapper[5034]: I0105 23:05:45.839501 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:05:45 crc kubenswrapper[5034]: E0105 23:05:45.842609 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:05:59 crc kubenswrapper[5034]: I0105 23:05:59.839519 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:05:59 crc kubenswrapper[5034]: E0105 23:05:59.840256 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:06:00 crc kubenswrapper[5034]: I0105 23:06:00.926809 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pv5fj"] Jan 05 23:06:00 crc kubenswrapper[5034]: E0105 23:06:00.927454 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" containerName="extract-content" Jan 05 23:06:00 crc kubenswrapper[5034]: I0105 23:06:00.927471 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" containerName="extract-content" Jan 05 23:06:00 crc kubenswrapper[5034]: E0105 23:06:00.927499 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" containerName="extract-utilities" Jan 05 23:06:00 crc kubenswrapper[5034]: I0105 23:06:00.927505 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" containerName="extract-utilities" Jan 05 23:06:00 crc kubenswrapper[5034]: E0105 23:06:00.927514 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" containerName="registry-server" Jan 05 23:06:00 crc kubenswrapper[5034]: I0105 23:06:00.927520 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" containerName="registry-server" Jan 05 23:06:00 crc kubenswrapper[5034]: I0105 23:06:00.929220 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1301dfd4-ca89-492a-8465-778b56a9a120" containerName="registry-server" Jan 05 23:06:00 crc kubenswrapper[5034]: I0105 23:06:00.930909 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:00 crc kubenswrapper[5034]: I0105 23:06:00.940998 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pv5fj"] Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.074007 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99dvp\" (UniqueName: \"kubernetes.io/projected/392c6e68-65a6-4f1c-82d3-24ee335b91db-kube-api-access-99dvp\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.074647 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-catalog-content\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.074821 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-utilities\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.176943 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99dvp\" (UniqueName: \"kubernetes.io/projected/392c6e68-65a6-4f1c-82d3-24ee335b91db-kube-api-access-99dvp\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.177018 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-catalog-content\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.177183 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-utilities\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.177686 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-catalog-content\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.177718 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-utilities\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.200616 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-99dvp\" (UniqueName: \"kubernetes.io/projected/392c6e68-65a6-4f1c-82d3-24ee335b91db-kube-api-access-99dvp\") pod \"community-operators-pv5fj\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.255338 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:01 crc kubenswrapper[5034]: I0105 23:06:01.746717 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pv5fj"] Jan 05 23:06:02 crc kubenswrapper[5034]: I0105 23:06:02.130579 5034 generic.go:334] "Generic (PLEG): container finished" podID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerID="6c76e19f2e63f31d065287f7b3c8e7c5a4e836195bbdaa8821bd83f1c73921f6" exitCode=0 Jan 05 23:06:02 crc kubenswrapper[5034]: I0105 23:06:02.130634 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv5fj" event={"ID":"392c6e68-65a6-4f1c-82d3-24ee335b91db","Type":"ContainerDied","Data":"6c76e19f2e63f31d065287f7b3c8e7c5a4e836195bbdaa8821bd83f1c73921f6"} Jan 05 23:06:02 crc kubenswrapper[5034]: I0105 23:06:02.130689 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv5fj" event={"ID":"392c6e68-65a6-4f1c-82d3-24ee335b91db","Type":"ContainerStarted","Data":"475e2ede224b08cae025a28f0831539329fc5a9e212008c3dd2749a3c24f097b"} Jan 05 23:06:03 crc kubenswrapper[5034]: I0105 23:06:03.141062 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv5fj" event={"ID":"392c6e68-65a6-4f1c-82d3-24ee335b91db","Type":"ContainerStarted","Data":"84b5f5cdcaef82f28a386a4abdd97ec9fe2867a8d23ddd55786045a3dc636f4d"} Jan 05 23:06:04 crc kubenswrapper[5034]: I0105 23:06:04.151094 5034 generic.go:334] "Generic (PLEG): container finished" podID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerID="84b5f5cdcaef82f28a386a4abdd97ec9fe2867a8d23ddd55786045a3dc636f4d" exitCode=0 Jan 05 23:06:04 crc kubenswrapper[5034]: I0105 23:06:04.151189 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv5fj" event={"ID":"392c6e68-65a6-4f1c-82d3-24ee335b91db","Type":"ContainerDied","Data":"84b5f5cdcaef82f28a386a4abdd97ec9fe2867a8d23ddd55786045a3dc636f4d"} Jan 05 23:06:05 crc kubenswrapper[5034]: I0105 23:06:05.163885 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv5fj" event={"ID":"392c6e68-65a6-4f1c-82d3-24ee335b91db","Type":"ContainerStarted","Data":"d795684eea5e087ce8c73f76d9af3e0235ea4142d0f2bb7f5a1f6c61edf5c30c"} Jan 05 23:06:05 crc kubenswrapper[5034]: I0105 23:06:05.190358 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pv5fj" podStartSLOduration=2.593016661 podStartE2EDuration="5.190325906s" podCreationTimestamp="2026-01-05 23:06:00 +0000 UTC" firstStartedPulling="2026-01-05 23:06:02.135351104 +0000 UTC m=+4454.507350543" lastFinishedPulling="2026-01-05 23:06:04.732660339 +0000 UTC m=+4457.104659788" observedRunningTime="2026-01-05 23:06:05.185832478 +0000 UTC m=+4457.557831927" watchObservedRunningTime="2026-01-05 23:06:05.190325906 +0000 UTC m=+4457.562325345" Jan 05 23:06:11 crc kubenswrapper[5034]: I0105 23:06:11.256285 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:11 crc kubenswrapper[5034]: I0105 23:06:11.256915 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:11 crc kubenswrapper[5034]: I0105 23:06:11.296070 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:11 crc kubenswrapper[5034]: I0105 23:06:11.838376 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:06:11 crc kubenswrapper[5034]: E0105 23:06:11.838653 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:06:12 crc kubenswrapper[5034]: I0105 23:06:12.269739 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:12 crc kubenswrapper[5034]: I0105 23:06:12.324412 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pv5fj"] Jan 05 23:06:14 crc kubenswrapper[5034]: I0105 23:06:14.234388 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pv5fj" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerName="registry-server" containerID="cri-o://d795684eea5e087ce8c73f76d9af3e0235ea4142d0f2bb7f5a1f6c61edf5c30c" gracePeriod=2 Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.244775 5034 generic.go:334] "Generic (PLEG): container finished" podID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerID="d795684eea5e087ce8c73f76d9af3e0235ea4142d0f2bb7f5a1f6c61edf5c30c" exitCode=0 Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.244845 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv5fj" event={"ID":"392c6e68-65a6-4f1c-82d3-24ee335b91db","Type":"ContainerDied","Data":"d795684eea5e087ce8c73f76d9af3e0235ea4142d0f2bb7f5a1f6c61edf5c30c"} Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.340664 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.411721 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99dvp\" (UniqueName: \"kubernetes.io/projected/392c6e68-65a6-4f1c-82d3-24ee335b91db-kube-api-access-99dvp\") pod \"392c6e68-65a6-4f1c-82d3-24ee335b91db\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.411923 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-catalog-content\") pod \"392c6e68-65a6-4f1c-82d3-24ee335b91db\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.412141 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-utilities\") pod \"392c6e68-65a6-4f1c-82d3-24ee335b91db\" (UID: \"392c6e68-65a6-4f1c-82d3-24ee335b91db\") " Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.413509 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-utilities" (OuterVolumeSpecName: "utilities") pod "392c6e68-65a6-4f1c-82d3-24ee335b91db" (UID: "392c6e68-65a6-4f1c-82d3-24ee335b91db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.417527 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392c6e68-65a6-4f1c-82d3-24ee335b91db-kube-api-access-99dvp" (OuterVolumeSpecName: "kube-api-access-99dvp") pod "392c6e68-65a6-4f1c-82d3-24ee335b91db" (UID: "392c6e68-65a6-4f1c-82d3-24ee335b91db"). InnerVolumeSpecName "kube-api-access-99dvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.461553 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "392c6e68-65a6-4f1c-82d3-24ee335b91db" (UID: "392c6e68-65a6-4f1c-82d3-24ee335b91db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.514347 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99dvp\" (UniqueName: \"kubernetes.io/projected/392c6e68-65a6-4f1c-82d3-24ee335b91db-kube-api-access-99dvp\") on node \"crc\" DevicePath \"\"" Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.514388 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:06:15 crc kubenswrapper[5034]: I0105 23:06:15.514399 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392c6e68-65a6-4f1c-82d3-24ee335b91db-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:06:16 crc kubenswrapper[5034]: I0105 23:06:16.256283 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv5fj" event={"ID":"392c6e68-65a6-4f1c-82d3-24ee335b91db","Type":"ContainerDied","Data":"475e2ede224b08cae025a28f0831539329fc5a9e212008c3dd2749a3c24f097b"} Jan 05 23:06:16 crc kubenswrapper[5034]: I0105 23:06:16.256384 5034 scope.go:117] "RemoveContainer" containerID="d795684eea5e087ce8c73f76d9af3e0235ea4142d0f2bb7f5a1f6c61edf5c30c" Jan 05 23:06:16 crc kubenswrapper[5034]: I0105 23:06:16.257888 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv5fj" Jan 05 23:06:16 crc kubenswrapper[5034]: I0105 23:06:16.289206 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pv5fj"] Jan 05 23:06:16 crc kubenswrapper[5034]: I0105 23:06:16.291154 5034 scope.go:117] "RemoveContainer" containerID="84b5f5cdcaef82f28a386a4abdd97ec9fe2867a8d23ddd55786045a3dc636f4d" Jan 05 23:06:16 crc kubenswrapper[5034]: I0105 23:06:16.298823 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pv5fj"] Jan 05 23:06:16 crc kubenswrapper[5034]: I0105 23:06:16.314280 5034 scope.go:117] "RemoveContainer" containerID="6c76e19f2e63f31d065287f7b3c8e7c5a4e836195bbdaa8821bd83f1c73921f6" Jan 05 23:06:17 crc kubenswrapper[5034]: I0105 23:06:17.847873 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" path="/var/lib/kubelet/pods/392c6e68-65a6-4f1c-82d3-24ee335b91db/volumes" Jan 05 23:06:24 crc kubenswrapper[5034]: I0105 23:06:24.838649 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:06:24 crc kubenswrapper[5034]: E0105 23:06:24.839498 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:06:36 crc kubenswrapper[5034]: I0105 23:06:36.838730 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:06:36 crc kubenswrapper[5034]: E0105 23:06:36.839638 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:06:50 crc kubenswrapper[5034]: I0105 23:06:50.838782 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:06:50 crc kubenswrapper[5034]: E0105 23:06:50.839842 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:07:01 crc kubenswrapper[5034]: I0105 23:07:01.838733 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:07:01 crc kubenswrapper[5034]: E0105 23:07:01.839257 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:07:12 crc kubenswrapper[5034]: I0105 23:07:12.838689 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:07:12 crc kubenswrapper[5034]: E0105 23:07:12.839632 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:07:27 crc kubenswrapper[5034]: I0105 23:07:27.843203 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:07:27 crc kubenswrapper[5034]: E0105 23:07:27.844139 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:07:38 crc kubenswrapper[5034]: I0105 23:07:38.838585 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c" Jan 05 23:07:38 crc kubenswrapper[5034]: E0105 23:07:38.839391 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 05 23:07:50 crc kubenswrapper[5034]: I0105 23:07:50.837953 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c"
Jan 05 23:07:52 crc kubenswrapper[5034]: I0105 23:07:52.023674 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"6958d2db4f5b2fc15030cb3b9a7b3e4850057b9ff8407ab2940c23e91d80bc16"}
Jan 05 23:10:20 crc kubenswrapper[5034]: I0105 23:10:20.469182 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 23:10:20 crc kubenswrapper[5034]: I0105 23:10:20.471265 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.827056 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xrwkx"]
Jan 05 23:10:42 crc kubenswrapper[5034]: E0105 23:10:42.828495 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerName="registry-server"
Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.828516 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerName="registry-server"
Jan 05 23:10:42 crc kubenswrapper[5034]: E0105 23:10:42.828528 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerName="extract-utilities"
Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.828537 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerName="extract-utilities"
Jan 05 23:10:42 crc kubenswrapper[5034]: E0105 23:10:42.828553 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerName="extract-content"
Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.828563 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerName="extract-content"
Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.828752 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="392c6e68-65a6-4f1c-82d3-24ee335b91db" containerName="registry-server"
Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.830226 5034 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.832883 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-catalog-content\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.832942 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npkcv\" (UniqueName: \"kubernetes.io/projected/a90da278-5b67-43a7-bf2b-e0564010092d-kube-api-access-npkcv\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.832964 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-utilities\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.844526 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xrwkx"] Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.934571 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-catalog-content\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.934627 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npkcv\" (UniqueName: \"kubernetes.io/projected/a90da278-5b67-43a7-bf2b-e0564010092d-kube-api-access-npkcv\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.934647 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-utilities\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.935301 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-utilities\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.935410 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-catalog-content\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:42 crc kubenswrapper[5034]: I0105 23:10:42.955307 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-npkcv\" (UniqueName: \"kubernetes.io/projected/a90da278-5b67-43a7-bf2b-e0564010092d-kube-api-access-npkcv\") pod \"certified-operators-xrwkx\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") " pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:43 crc kubenswrapper[5034]: I0105 23:10:43.152579 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:43 crc kubenswrapper[5034]: I0105 23:10:43.722042 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xrwkx"] Jan 05 23:10:44 crc kubenswrapper[5034]: I0105 23:10:44.493020 5034 generic.go:334] "Generic (PLEG): container finished" podID="a90da278-5b67-43a7-bf2b-e0564010092d" containerID="8371fbb8ec819e0dd411cf2e77640084c0d88be96c4491daff7d7048547112e0" exitCode=0 Jan 05 23:10:44 crc kubenswrapper[5034]: I0105 23:10:44.493137 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrwkx" event={"ID":"a90da278-5b67-43a7-bf2b-e0564010092d","Type":"ContainerDied","Data":"8371fbb8ec819e0dd411cf2e77640084c0d88be96c4491daff7d7048547112e0"} Jan 05 23:10:44 crc kubenswrapper[5034]: I0105 23:10:44.493334 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrwkx" event={"ID":"a90da278-5b67-43a7-bf2b-e0564010092d","Type":"ContainerStarted","Data":"dec7fe18f98b297eca900b9d1f8d3f9e31b291a16a28be928defc34a93355bfa"} Jan 05 23:10:44 crc kubenswrapper[5034]: I0105 23:10:44.494893 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 23:10:45 crc kubenswrapper[5034]: I0105 23:10:45.504835 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrwkx" event={"ID":"a90da278-5b67-43a7-bf2b-e0564010092d","Type":"ContainerStarted","Data":"1acf03549cc01d37de447b632b872f57cae70b1cb89aaf0d9b836d2bc3a3bb4b"} Jan 05 23:10:46 crc kubenswrapper[5034]: I0105 23:10:46.515876 5034 generic.go:334] "Generic (PLEG): container finished" podID="a90da278-5b67-43a7-bf2b-e0564010092d" containerID="1acf03549cc01d37de447b632b872f57cae70b1cb89aaf0d9b836d2bc3a3bb4b" exitCode=0 Jan 05 23:10:46 crc kubenswrapper[5034]: I0105 23:10:46.515947 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrwkx" event={"ID":"a90da278-5b67-43a7-bf2b-e0564010092d","Type":"ContainerDied","Data":"1acf03549cc01d37de447b632b872f57cae70b1cb89aaf0d9b836d2bc3a3bb4b"} Jan 05 23:10:47 crc kubenswrapper[5034]: I0105 23:10:47.526955 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrwkx" event={"ID":"a90da278-5b67-43a7-bf2b-e0564010092d","Type":"ContainerStarted","Data":"02015bad8c0f23a1fddbfd427a543e20176324e3132c82cf67390e1d5e8f15d6"} Jan 05 23:10:47 crc kubenswrapper[5034]: I0105 23:10:47.550328 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xrwkx" podStartSLOduration=3.126053435 podStartE2EDuration="5.550300871s" podCreationTimestamp="2026-01-05 23:10:42 +0000 UTC" firstStartedPulling="2026-01-05 23:10:44.494653381 +0000 UTC m=+4736.866652820" lastFinishedPulling="2026-01-05 23:10:46.918900817 +0000 UTC m=+4739.290900256" observedRunningTime="2026-01-05 23:10:47.545544616 +0000 UTC m=+4739.917544075" watchObservedRunningTime="2026-01-05 
Jan 05 23:10:50 crc kubenswrapper[5034]: I0105 23:10:50.468991 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 23:10:50 crc kubenswrapper[5034]: I0105 23:10:50.469502 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 23:10:53 crc kubenswrapper[5034]: I0105 23:10:53.153129 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xrwkx"
Jan 05 23:10:53 crc kubenswrapper[5034]: I0105 23:10:53.153480 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xrwkx"
Jan 05 23:10:53 crc kubenswrapper[5034]: I0105 23:10:53.215683 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xrwkx"
Jan 05 23:10:53 crc kubenswrapper[5034]: I0105 23:10:53.911327 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xrwkx"
Jan 05 23:10:53 crc kubenswrapper[5034]: I0105 23:10:53.955867 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xrwkx"]
Jan 05 23:10:55 crc kubenswrapper[5034]: I0105 23:10:55.594866 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xrwkx" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" containerName="registry-server" containerID="cri-o://02015bad8c0f23a1fddbfd427a543e20176324e3132c82cf67390e1d5e8f15d6" gracePeriod=2
Jan 05 23:10:56 crc kubenswrapper[5034]: I0105 23:10:56.606816 5034 generic.go:334] "Generic (PLEG): container finished" podID="a90da278-5b67-43a7-bf2b-e0564010092d" containerID="02015bad8c0f23a1fddbfd427a543e20176324e3132c82cf67390e1d5e8f15d6" exitCode=0
Jan 05 23:10:56 crc kubenswrapper[5034]: I0105 23:10:56.607242 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrwkx" event={"ID":"a90da278-5b67-43a7-bf2b-e0564010092d","Type":"ContainerDied","Data":"02015bad8c0f23a1fddbfd427a543e20176324e3132c82cf67390e1d5e8f15d6"}
Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.052971 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xrwkx"
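
[Annotation] The "Killing container with a grace period" entry above (gracePeriod=2 for registry-server; later in this log, gracePeriod=600 for machine-config-daemon) follows the standard graceful-termination sequence: the process is asked to exit first, and is force-killed only if still alive when the grace period expires, after which PLEG reports ContainerDied. A rough sketch of that sequence, assuming a plain Unix process stands in for the CRI stop call (illustrative, not kubelet or CRI-O source):

    // kill_with_grace.go — SIGTERM, wait up to the grace period, then SIGKILL.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    func main() {
    	// A long-running stand-in for the container's main process.
    	cmd := exec.Command("sleep", "60")
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}

    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()

    	const gracePeriod = 2 * time.Second
    	cmd.Process.Signal(syscall.SIGTERM) // polite request to exit

    	select {
    	case err := <-done:
    		fmt.Println("exited within grace period:", err)
    	case <-time.After(gracePeriod):
    		cmd.Process.Kill() // grace period expired: SIGKILL
    		fmt.Println("grace period expired, killed:", <-done)
    	}
    }
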
Need to start a new one" pod="openshift-marketplace/certified-operators-xrwkx"
Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.251338 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-utilities\") pod \"a90da278-5b67-43a7-bf2b-e0564010092d\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") "
Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.251428 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-catalog-content\") pod \"a90da278-5b67-43a7-bf2b-e0564010092d\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") "
Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.251492 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npkcv\" (UniqueName: \"kubernetes.io/projected/a90da278-5b67-43a7-bf2b-e0564010092d-kube-api-access-npkcv\") pod \"a90da278-5b67-43a7-bf2b-e0564010092d\" (UID: \"a90da278-5b67-43a7-bf2b-e0564010092d\") "
Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.257902 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-utilities" (OuterVolumeSpecName: "utilities") pod "a90da278-5b67-43a7-bf2b-e0564010092d" (UID: "a90da278-5b67-43a7-bf2b-e0564010092d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.263552 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90da278-5b67-43a7-bf2b-e0564010092d-kube-api-access-npkcv" (OuterVolumeSpecName: "kube-api-access-npkcv") pod "a90da278-5b67-43a7-bf2b-e0564010092d" (UID: "a90da278-5b67-43a7-bf2b-e0564010092d"). InnerVolumeSpecName "kube-api-access-npkcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.307572 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a90da278-5b67-43a7-bf2b-e0564010092d" (UID: "a90da278-5b67-43a7-bf2b-e0564010092d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
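
[Annotation] The UnmountVolume/TearDown entries above are the kubelet volume manager's reconciler converging actual state toward desired state: once the pod leaves the desired world, every volume still mounted for it is unmounted, and only afterwards does the log report "Volume detached". A toy model of that desired-vs-actual diff, with hypothetical names (this is not the kubelet reconciler itself):

    // reconcile.go — toy sketch of the desired-vs-actual volume diff.
    package main

    import "fmt"

    func reconcile(desired, actual map[string]bool) {
    	// Volumes mounted but no longer desired get torn down...
    	for vol := range actual {
    		if !desired[vol] {
    			fmt.Println("UnmountVolume started for", vol)
    		}
    	}
    	// ...and volumes desired but not yet mounted get set up.
    	for vol := range desired {
    		if !actual[vol] {
    			fmt.Println("MountVolume started for", vol)
    		}
    	}
    }

    func main() {
    	actual := map[string]bool{
    		"utilities": true, "catalog-content": true, "kube-api-access-npkcv": true,
    	}
    	desired := map[string]bool{} // pod deleted: nothing desired any more
    	reconcile(desired, actual)
    }
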
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.353300 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.353340 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90da278-5b67-43a7-bf2b-e0564010092d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.353353 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npkcv\" (UniqueName: \"kubernetes.io/projected/a90da278-5b67-43a7-bf2b-e0564010092d-kube-api-access-npkcv\") on node \"crc\" DevicePath \"\"" Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.618050 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrwkx" event={"ID":"a90da278-5b67-43a7-bf2b-e0564010092d","Type":"ContainerDied","Data":"dec7fe18f98b297eca900b9d1f8d3f9e31b291a16a28be928defc34a93355bfa"} Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.618147 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xrwkx" Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.618207 5034 scope.go:117] "RemoveContainer" containerID="02015bad8c0f23a1fddbfd427a543e20176324e3132c82cf67390e1d5e8f15d6" Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.648796 5034 scope.go:117] "RemoveContainer" containerID="1acf03549cc01d37de447b632b872f57cae70b1cb89aaf0d9b836d2bc3a3bb4b" Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.654895 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xrwkx"] Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.661666 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xrwkx"] Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.686465 5034 scope.go:117] "RemoveContainer" containerID="8371fbb8ec819e0dd411cf2e77640084c0d88be96c4491daff7d7048547112e0" Jan 05 23:10:57 crc kubenswrapper[5034]: I0105 23:10:57.851091 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" path="/var/lib/kubelet/pods/a90da278-5b67-43a7-bf2b-e0564010092d/volumes" Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.468695 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.469492 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.469615 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.470767 5034 
Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.470870 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://6958d2db4f5b2fc15030cb3b9a7b3e4850057b9ff8407ab2940c23e91d80bc16" gracePeriod=600
Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.809255 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="6958d2db4f5b2fc15030cb3b9a7b3e4850057b9ff8407ab2940c23e91d80bc16" exitCode=0
Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.809316 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"6958d2db4f5b2fc15030cb3b9a7b3e4850057b9ff8407ab2940c23e91d80bc16"}
Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.809628 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"}
Jan 05 23:11:20 crc kubenswrapper[5034]: I0105 23:11:20.809650 5034 scope.go:117] "RemoveContainer" containerID="a67a3154ebfa860a55f0dbd0d927b3dfd445bb6f9e57ffa7389de4544142c74c"
Jan 05 23:11:38 crc kubenswrapper[5034]: I0105 23:11:38.884317 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-d6lqk"]
Jan 05 23:11:38 crc kubenswrapper[5034]: I0105 23:11:38.893216 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-d6lqk"]
Jan 05 23:11:38 crc kubenswrapper[5034]: I0105 23:11:38.999102 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-pwtdx"]
Jan 05 23:11:38 crc kubenswrapper[5034]: E0105 23:11:38.999592 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" containerName="extract-utilities"
Jan 05 23:11:38 crc kubenswrapper[5034]: I0105 23:11:38.999622 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" containerName="extract-utilities"
Jan 05 23:11:38 crc kubenswrapper[5034]: E0105 23:11:38.999640 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" containerName="registry-server"
Jan 05 23:11:38 crc kubenswrapper[5034]: I0105 23:11:38.999649 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" containerName="registry-server"
Jan 05 23:11:38 crc kubenswrapper[5034]: E0105 23:11:38.999667 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" containerName="extract-content"
Jan 05 23:11:38 crc kubenswrapper[5034]: I0105 23:11:38.999677 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" containerName="extract-content"
Jan 05 23:11:38 crc
kubenswrapper[5034]: I0105 23:11:38.999846 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90da278-5b67-43a7-bf2b-e0564010092d" containerName="registry-server" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.001529 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.005353 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.005361 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.005361 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.006272 5034 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-ck854" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.011843 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-pwtdx"] Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.074052 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5f417f77-bf35-42f1-ba29-a7871ab9a715-crc-storage\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.074142 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9gsm\" (UniqueName: \"kubernetes.io/projected/5f417f77-bf35-42f1-ba29-a7871ab9a715-kube-api-access-s9gsm\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.074205 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5f417f77-bf35-42f1-ba29-a7871ab9a715-node-mnt\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.175584 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5f417f77-bf35-42f1-ba29-a7871ab9a715-crc-storage\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.175652 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9gsm\" (UniqueName: \"kubernetes.io/projected/5f417f77-bf35-42f1-ba29-a7871ab9a715-kube-api-access-s9gsm\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.175694 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5f417f77-bf35-42f1-ba29-a7871ab9a715-node-mnt\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 
23:11:39.176227 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5f417f77-bf35-42f1-ba29-a7871ab9a715-node-mnt\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.176730 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5f417f77-bf35-42f1-ba29-a7871ab9a715-crc-storage\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.200590 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9gsm\" (UniqueName: \"kubernetes.io/projected/5f417f77-bf35-42f1-ba29-a7871ab9a715-kube-api-access-s9gsm\") pod \"crc-storage-crc-pwtdx\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.374588 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.624057 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-pwtdx"] Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.858445 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6777338-b2ee-4112-8f06-ea26ba3b8183" path="/var/lib/kubelet/pods/b6777338-b2ee-4112-8f06-ea26ba3b8183/volumes" Jan 05 23:11:39 crc kubenswrapper[5034]: I0105 23:11:39.969821 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pwtdx" event={"ID":"5f417f77-bf35-42f1-ba29-a7871ab9a715","Type":"ContainerStarted","Data":"3141c2eea856c4552aa95595630670ca9adc7123edaf86043fcbe3d934030c8e"} Jan 05 23:11:40 crc kubenswrapper[5034]: I0105 23:11:40.977801 5034 generic.go:334] "Generic (PLEG): container finished" podID="5f417f77-bf35-42f1-ba29-a7871ab9a715" containerID="987fa31a588c935ad9e2e1a5a0769d72077787cf5267d40455792130363e866c" exitCode=0 Jan 05 23:11:40 crc kubenswrapper[5034]: I0105 23:11:40.977906 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pwtdx" event={"ID":"5f417f77-bf35-42f1-ba29-a7871ab9a715","Type":"ContainerDied","Data":"987fa31a588c935ad9e2e1a5a0769d72077787cf5267d40455792130363e866c"} Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.249803 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.427697 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5f417f77-bf35-42f1-ba29-a7871ab9a715-crc-storage\") pod \"5f417f77-bf35-42f1-ba29-a7871ab9a715\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.427837 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9gsm\" (UniqueName: \"kubernetes.io/projected/5f417f77-bf35-42f1-ba29-a7871ab9a715-kube-api-access-s9gsm\") pod \"5f417f77-bf35-42f1-ba29-a7871ab9a715\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.427994 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5f417f77-bf35-42f1-ba29-a7871ab9a715-node-mnt\") pod \"5f417f77-bf35-42f1-ba29-a7871ab9a715\" (UID: \"5f417f77-bf35-42f1-ba29-a7871ab9a715\") " Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.428050 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f417f77-bf35-42f1-ba29-a7871ab9a715-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "5f417f77-bf35-42f1-ba29-a7871ab9a715" (UID: "5f417f77-bf35-42f1-ba29-a7871ab9a715"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.428354 5034 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5f417f77-bf35-42f1-ba29-a7871ab9a715-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.433531 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f417f77-bf35-42f1-ba29-a7871ab9a715-kube-api-access-s9gsm" (OuterVolumeSpecName: "kube-api-access-s9gsm") pod "5f417f77-bf35-42f1-ba29-a7871ab9a715" (UID: "5f417f77-bf35-42f1-ba29-a7871ab9a715"). InnerVolumeSpecName "kube-api-access-s9gsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.447410 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f417f77-bf35-42f1-ba29-a7871ab9a715-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "5f417f77-bf35-42f1-ba29-a7871ab9a715" (UID: "5f417f77-bf35-42f1-ba29-a7871ab9a715"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.530295 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9gsm\" (UniqueName: \"kubernetes.io/projected/5f417f77-bf35-42f1-ba29-a7871ab9a715-kube-api-access-s9gsm\") on node \"crc\" DevicePath \"\"" Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.530347 5034 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5f417f77-bf35-42f1-ba29-a7871ab9a715-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.992977 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pwtdx" event={"ID":"5f417f77-bf35-42f1-ba29-a7871ab9a715","Type":"ContainerDied","Data":"3141c2eea856c4552aa95595630670ca9adc7123edaf86043fcbe3d934030c8e"} Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.993023 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3141c2eea856c4552aa95595630670ca9adc7123edaf86043fcbe3d934030c8e" Jan 05 23:11:42 crc kubenswrapper[5034]: I0105 23:11:42.993098 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-pwtdx" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.682814 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-pwtdx"] Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.691890 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-pwtdx"] Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.818476 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-6xmx5"] Jan 05 23:11:44 crc kubenswrapper[5034]: E0105 23:11:44.818824 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f417f77-bf35-42f1-ba29-a7871ab9a715" containerName="storage" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.818840 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f417f77-bf35-42f1-ba29-a7871ab9a715" containerName="storage" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.818965 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f417f77-bf35-42f1-ba29-a7871ab9a715" containerName="storage" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.819509 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.836783 5034 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-ck854" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.836769 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.836950 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.837466 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.857964 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6xmx5"] Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.967646 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrbcs\" (UniqueName: \"kubernetes.io/projected/3ff2109f-360e-4d16-8224-c5145633e7cc-kube-api-access-zrbcs\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.967794 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ff2109f-360e-4d16-8224-c5145633e7cc-crc-storage\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:44 crc kubenswrapper[5034]: I0105 23:11:44.968212 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ff2109f-360e-4d16-8224-c5145633e7cc-node-mnt\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.070325 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbcs\" (UniqueName: \"kubernetes.io/projected/3ff2109f-360e-4d16-8224-c5145633e7cc-kube-api-access-zrbcs\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.070470 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ff2109f-360e-4d16-8224-c5145633e7cc-crc-storage\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.070619 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ff2109f-360e-4d16-8224-c5145633e7cc-node-mnt\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.070956 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ff2109f-360e-4d16-8224-c5145633e7cc-node-mnt\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " 
pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.071601 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ff2109f-360e-4d16-8224-c5145633e7cc-crc-storage\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.093725 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbcs\" (UniqueName: \"kubernetes.io/projected/3ff2109f-360e-4d16-8224-c5145633e7cc-kube-api-access-zrbcs\") pod \"crc-storage-crc-6xmx5\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.160884 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.622955 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6xmx5"] Jan 05 23:11:45 crc kubenswrapper[5034]: I0105 23:11:45.847167 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f417f77-bf35-42f1-ba29-a7871ab9a715" path="/var/lib/kubelet/pods/5f417f77-bf35-42f1-ba29-a7871ab9a715/volumes" Jan 05 23:11:46 crc kubenswrapper[5034]: I0105 23:11:46.016547 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6xmx5" event={"ID":"3ff2109f-360e-4d16-8224-c5145633e7cc","Type":"ContainerStarted","Data":"65413f3fb383c110d133cb30c789dccfe289808c922a3e3ea2607cdbed04f8d9"} Jan 05 23:11:47 crc kubenswrapper[5034]: I0105 23:11:47.031374 5034 generic.go:334] "Generic (PLEG): container finished" podID="3ff2109f-360e-4d16-8224-c5145633e7cc" containerID="1a1ec7b67d9f9edf99b412094c57e17c7af99f3549491e2ac5f52986b7238cdb" exitCode=0 Jan 05 23:11:47 crc kubenswrapper[5034]: I0105 23:11:47.031480 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6xmx5" event={"ID":"3ff2109f-360e-4d16-8224-c5145633e7cc","Type":"ContainerDied","Data":"1a1ec7b67d9f9edf99b412094c57e17c7af99f3549491e2ac5f52986b7238cdb"} Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.336479 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.525713 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ff2109f-360e-4d16-8224-c5145633e7cc-crc-storage\") pod \"3ff2109f-360e-4d16-8224-c5145633e7cc\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.525896 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ff2109f-360e-4d16-8224-c5145633e7cc-node-mnt\") pod \"3ff2109f-360e-4d16-8224-c5145633e7cc\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.525980 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbcs\" (UniqueName: \"kubernetes.io/projected/3ff2109f-360e-4d16-8224-c5145633e7cc-kube-api-access-zrbcs\") pod \"3ff2109f-360e-4d16-8224-c5145633e7cc\" (UID: \"3ff2109f-360e-4d16-8224-c5145633e7cc\") " Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.526035 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ff2109f-360e-4d16-8224-c5145633e7cc-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3ff2109f-360e-4d16-8224-c5145633e7cc" (UID: "3ff2109f-360e-4d16-8224-c5145633e7cc"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.526914 5034 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ff2109f-360e-4d16-8224-c5145633e7cc-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.532495 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff2109f-360e-4d16-8224-c5145633e7cc-kube-api-access-zrbcs" (OuterVolumeSpecName: "kube-api-access-zrbcs") pod "3ff2109f-360e-4d16-8224-c5145633e7cc" (UID: "3ff2109f-360e-4d16-8224-c5145633e7cc"). InnerVolumeSpecName "kube-api-access-zrbcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.544217 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff2109f-360e-4d16-8224-c5145633e7cc-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3ff2109f-360e-4d16-8224-c5145633e7cc" (UID: "3ff2109f-360e-4d16-8224-c5145633e7cc"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.628982 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrbcs\" (UniqueName: \"kubernetes.io/projected/3ff2109f-360e-4d16-8224-c5145633e7cc-kube-api-access-zrbcs\") on node \"crc\" DevicePath \"\"" Jan 05 23:11:48 crc kubenswrapper[5034]: I0105 23:11:48.629037 5034 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ff2109f-360e-4d16-8224-c5145633e7cc-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 05 23:11:49 crc kubenswrapper[5034]: I0105 23:11:49.050917 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6xmx5" event={"ID":"3ff2109f-360e-4d16-8224-c5145633e7cc","Type":"ContainerDied","Data":"65413f3fb383c110d133cb30c789dccfe289808c922a3e3ea2607cdbed04f8d9"} Jan 05 23:11:49 crc kubenswrapper[5034]: I0105 23:11:49.051328 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65413f3fb383c110d133cb30c789dccfe289808c922a3e3ea2607cdbed04f8d9" Jan 05 23:11:49 crc kubenswrapper[5034]: I0105 23:11:49.050971 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-6xmx5" Jan 05 23:11:58 crc kubenswrapper[5034]: I0105 23:11:58.190288 5034 scope.go:117] "RemoveContainer" containerID="aea9a008038f78ab0f46bcdb1a7b3d42296f27d32b9c0e567c8144ae697e1f97" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.524342 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6249c"] Jan 05 23:12:37 crc kubenswrapper[5034]: E0105 23:12:37.525357 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff2109f-360e-4d16-8224-c5145633e7cc" containerName="storage" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.525373 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff2109f-360e-4d16-8224-c5145633e7cc" containerName="storage" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.525551 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff2109f-360e-4d16-8224-c5145633e7cc" containerName="storage" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.526829 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.539827 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6249c"] Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.613012 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-utilities\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.613357 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwlx5\" (UniqueName: \"kubernetes.io/projected/f4be838f-d4a9-47bb-82fa-457415c25e57-kube-api-access-pwlx5\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.613449 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-catalog-content\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.714513 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwlx5\" (UniqueName: \"kubernetes.io/projected/f4be838f-d4a9-47bb-82fa-457415c25e57-kube-api-access-pwlx5\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.714632 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-catalog-content\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.714680 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-utilities\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.715274 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-utilities\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.715357 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-catalog-content\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.737896 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pwlx5\" (UniqueName: \"kubernetes.io/projected/f4be838f-d4a9-47bb-82fa-457415c25e57-kube-api-access-pwlx5\") pod \"redhat-marketplace-6249c\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:37 crc kubenswrapper[5034]: I0105 23:12:37.862689 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:38 crc kubenswrapper[5034]: I0105 23:12:38.356480 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6249c"] Jan 05 23:12:38 crc kubenswrapper[5034]: I0105 23:12:38.435230 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6249c" event={"ID":"f4be838f-d4a9-47bb-82fa-457415c25e57","Type":"ContainerStarted","Data":"8279782ac9594442852269f7c2db68c9902c7a986e31c5250bb13ca52f84747f"} Jan 05 23:12:39 crc kubenswrapper[5034]: I0105 23:12:39.444344 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerID="ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1" exitCode=0 Jan 05 23:12:39 crc kubenswrapper[5034]: I0105 23:12:39.444430 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6249c" event={"ID":"f4be838f-d4a9-47bb-82fa-457415c25e57","Type":"ContainerDied","Data":"ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1"} Jan 05 23:12:40 crc kubenswrapper[5034]: I0105 23:12:40.454562 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6249c" event={"ID":"f4be838f-d4a9-47bb-82fa-457415c25e57","Type":"ContainerStarted","Data":"76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c"} Jan 05 23:12:41 crc kubenswrapper[5034]: I0105 23:12:41.468397 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerID="76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c" exitCode=0 Jan 05 23:12:41 crc kubenswrapper[5034]: I0105 23:12:41.468468 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6249c" event={"ID":"f4be838f-d4a9-47bb-82fa-457415c25e57","Type":"ContainerDied","Data":"76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c"} Jan 05 23:12:42 crc kubenswrapper[5034]: I0105 23:12:42.478819 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6249c" event={"ID":"f4be838f-d4a9-47bb-82fa-457415c25e57","Type":"ContainerStarted","Data":"6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35"} Jan 05 23:12:42 crc kubenswrapper[5034]: I0105 23:12:42.503948 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6249c" podStartSLOduration=3.050875336 podStartE2EDuration="5.503920611s" podCreationTimestamp="2026-01-05 23:12:37 +0000 UTC" firstStartedPulling="2026-01-05 23:12:39.445923095 +0000 UTC m=+4851.817922534" lastFinishedPulling="2026-01-05 23:12:41.89896833 +0000 UTC m=+4854.270967809" observedRunningTime="2026-01-05 23:12:42.497854269 +0000 UTC m=+4854.869853708" watchObservedRunningTime="2026-01-05 23:12:42.503920611 +0000 UTC m=+4854.875920050" Jan 05 23:12:47 crc kubenswrapper[5034]: I0105 23:12:47.863231 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:47 crc kubenswrapper[5034]: I0105 23:12:47.864357 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:47 crc kubenswrapper[5034]: I0105 23:12:47.936770 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:48 crc kubenswrapper[5034]: I0105 23:12:48.565831 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:48 crc kubenswrapper[5034]: I0105 23:12:48.621504 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6249c"] Jan 05 23:12:50 crc kubenswrapper[5034]: I0105 23:12:50.533821 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6249c" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerName="registry-server" containerID="cri-o://6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35" gracePeriod=2 Jan 05 23:12:50 crc kubenswrapper[5034]: I0105 23:12:50.929479 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.031858 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-utilities\") pod \"f4be838f-d4a9-47bb-82fa-457415c25e57\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.031952 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-catalog-content\") pod \"f4be838f-d4a9-47bb-82fa-457415c25e57\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.032055 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwlx5\" (UniqueName: \"kubernetes.io/projected/f4be838f-d4a9-47bb-82fa-457415c25e57-kube-api-access-pwlx5\") pod \"f4be838f-d4a9-47bb-82fa-457415c25e57\" (UID: \"f4be838f-d4a9-47bb-82fa-457415c25e57\") " Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.032642 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-utilities" (OuterVolumeSpecName: "utilities") pod "f4be838f-d4a9-47bb-82fa-457415c25e57" (UID: "f4be838f-d4a9-47bb-82fa-457415c25e57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.045575 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4be838f-d4a9-47bb-82fa-457415c25e57-kube-api-access-pwlx5" (OuterVolumeSpecName: "kube-api-access-pwlx5") pod "f4be838f-d4a9-47bb-82fa-457415c25e57" (UID: "f4be838f-d4a9-47bb-82fa-457415c25e57"). InnerVolumeSpecName "kube-api-access-pwlx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.059623 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4be838f-d4a9-47bb-82fa-457415c25e57" (UID: "f4be838f-d4a9-47bb-82fa-457415c25e57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.133670 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.133737 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4be838f-d4a9-47bb-82fa-457415c25e57-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.133753 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwlx5\" (UniqueName: \"kubernetes.io/projected/f4be838f-d4a9-47bb-82fa-457415c25e57-kube-api-access-pwlx5\") on node \"crc\" DevicePath \"\"" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.552922 5034 generic.go:334] "Generic (PLEG): container finished" podID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerID="6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35" exitCode=0 Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.552994 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6249c" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.553051 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6249c" event={"ID":"f4be838f-d4a9-47bb-82fa-457415c25e57","Type":"ContainerDied","Data":"6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35"} Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.554371 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6249c" event={"ID":"f4be838f-d4a9-47bb-82fa-457415c25e57","Type":"ContainerDied","Data":"8279782ac9594442852269f7c2db68c9902c7a986e31c5250bb13ca52f84747f"} Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.554434 5034 scope.go:117] "RemoveContainer" containerID="6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.587762 5034 scope.go:117] "RemoveContainer" containerID="76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.601854 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6249c"] Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.622209 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6249c"] Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.631672 5034 scope.go:117] "RemoveContainer" containerID="ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.658171 5034 scope.go:117] "RemoveContainer" containerID="6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35" Jan 05 23:12:51 crc kubenswrapper[5034]: E0105 23:12:51.658902 5034 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35\": container with ID starting with 6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35 not found: ID does not exist" containerID="6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.659046 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35"} err="failed to get container status \"6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35\": rpc error: code = NotFound desc = could not find container \"6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35\": container with ID starting with 6950131dd24154f17a2f2f617342c78f4eab8c6021fee484510253fb362a4e35 not found: ID does not exist" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.659217 5034 scope.go:117] "RemoveContainer" containerID="76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c" Jan 05 23:12:51 crc kubenswrapper[5034]: E0105 23:12:51.659819 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c\": container with ID starting with 76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c not found: ID does not exist" containerID="76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.659942 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c"} err="failed to get container status \"76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c\": rpc error: code = NotFound desc = could not find container \"76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c\": container with ID starting with 76a9668fade58d204411b3f154e9afab20604783575eb3068a29d58ea9c02d5c not found: ID does not exist" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.660042 5034 scope.go:117] "RemoveContainer" containerID="ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1" Jan 05 23:12:51 crc kubenswrapper[5034]: E0105 23:12:51.660505 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1\": container with ID starting with ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1 not found: ID does not exist" containerID="ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.660671 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1"} err="failed to get container status \"ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1\": rpc error: code = NotFound desc = could not find container \"ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1\": container with ID starting with ec2a52ce25f510054c729c894a52d6c53d2f4b30e55de8dfcbe4547a16aacbf1 not found: ID does not exist" Jan 05 23:12:51 crc kubenswrapper[5034]: I0105 23:12:51.854907 5034 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" path="/var/lib/kubelet/pods/f4be838f-d4a9-47bb-82fa-457415c25e57/volumes" Jan 05 23:13:20 crc kubenswrapper[5034]: I0105 23:13:20.471886 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:13:20 crc kubenswrapper[5034]: I0105 23:13:20.472797 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.118352 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-cn9sw"] Jan 05 23:13:46 crc kubenswrapper[5034]: E0105 23:13:46.119976 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerName="extract-content" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.120000 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerName="extract-content" Jan 05 23:13:46 crc kubenswrapper[5034]: E0105 23:13:46.120045 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerName="registry-server" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.120054 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerName="registry-server" Jan 05 23:13:46 crc kubenswrapper[5034]: E0105 23:13:46.120089 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerName="extract-utilities" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.120100 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerName="extract-utilities" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.120361 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4be838f-d4a9-47bb-82fa-457415c25e57" containerName="registry-server" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.121559 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.123930 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.124042 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-nb8b9"] Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.124270 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.125065 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-w8wkq" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.127527 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.131055 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.131608 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.149429 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-nb8b9"] Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.174272 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-cn9sw"] Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.222688 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-config\") pod \"dnsmasq-dns-5986db9b4f-nb8b9\" (UID: \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\") " pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.222750 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dtfd\" (UniqueName: \"kubernetes.io/projected/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-kube-api-access-9dtfd\") pod \"dnsmasq-dns-5986db9b4f-nb8b9\" (UID: \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\") " pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.222774 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nrs\" (UniqueName: \"kubernetes.io/projected/6a8f2b02-7361-4bf0-a515-8ffa7d183879-kube-api-access-64nrs\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.222805 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-config\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.222822 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " 
pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.324545 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-config\") pod \"dnsmasq-dns-5986db9b4f-nb8b9\" (UID: \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\") " pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.324605 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dtfd\" (UniqueName: \"kubernetes.io/projected/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-kube-api-access-9dtfd\") pod \"dnsmasq-dns-5986db9b4f-nb8b9\" (UID: \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\") " pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.324627 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64nrs\" (UniqueName: \"kubernetes.io/projected/6a8f2b02-7361-4bf0-a515-8ffa7d183879-kube-api-access-64nrs\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.324656 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-config\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.324673 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.325792 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.325862 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-config\") pod \"dnsmasq-dns-5986db9b4f-nb8b9\" (UID: \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\") " pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.325875 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-config\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.348045 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dtfd\" (UniqueName: \"kubernetes.io/projected/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-kube-api-access-9dtfd\") pod \"dnsmasq-dns-5986db9b4f-nb8b9\" (UID: \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\") " pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.355139 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64nrs\" (UniqueName: \"kubernetes.io/projected/6a8f2b02-7361-4bf0-a515-8ffa7d183879-kube-api-access-64nrs\") pod \"dnsmasq-dns-56bbd59dc5-cn9sw\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.465922 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.479432 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.536798 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-cn9sw"] Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.570969 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-dtpqm"] Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.572567 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.590620 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-dtpqm"] Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.732679 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-config\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.733118 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-dns-svc\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.733146 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qls2\" (UniqueName: \"kubernetes.io/projected/8e0eb56f-8954-4ae6-a20c-52271fa70c91-kube-api-access-2qls2\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.834573 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-dns-svc\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.834638 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qls2\" (UniqueName: \"kubernetes.io/projected/8e0eb56f-8954-4ae6-a20c-52271fa70c91-kube-api-access-2qls2\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.834724 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-config\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.835606 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-config\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.835862 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-dns-svc\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.855609 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qls2\" (UniqueName: \"kubernetes.io/projected/8e0eb56f-8954-4ae6-a20c-52271fa70c91-kube-api-access-2qls2\") pod \"dnsmasq-dns-865d9b578f-dtpqm\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:46 crc kubenswrapper[5034]: I0105 23:13:46.913605 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.106596 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-nb8b9"] Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.126865 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-cn9sw"] Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.149121 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-lnhbn"] Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.153281 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.171116 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-lnhbn"] Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.217405 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-nb8b9"] Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.261866 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-dtpqm"] Jan 05 23:13:47 crc kubenswrapper[5034]: W0105 23:13:47.268977 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e0eb56f_8954_4ae6_a20c_52271fa70c91.slice/crio-5d1a29639225d518e9439e509641266951f7607b793d3284b2f671bdc8ca7bf4 WatchSource:0}: Error finding container 5d1a29639225d518e9439e509641266951f7607b793d3284b2f671bdc8ca7bf4: Status 404 returned error can't find the container with id 5d1a29639225d518e9439e509641266951f7607b793d3284b2f671bdc8ca7bf4 Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.348750 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-config\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.348818 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.348900 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcwww\" (UniqueName: \"kubernetes.io/projected/cf778645-f825-4394-a3d2-84b640e6ade8-kube-api-access-tcwww\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.450716 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-config\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.450793 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.450893 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcwww\" (UniqueName: \"kubernetes.io/projected/cf778645-f825-4394-a3d2-84b640e6ade8-kube-api-access-tcwww\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.452024 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.452065 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-config\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.471519 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcwww\" (UniqueName: \"kubernetes.io/projected/cf778645-f825-4394-a3d2-84b640e6ade8-kube-api-access-tcwww\") pod \"dnsmasq-dns-5d79f765b5-lnhbn\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.490610 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.721372 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.727544 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.733371 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.733663 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-p8r7c" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.733672 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.733858 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.733912 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.733986 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.734718 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.743409 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858140 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858499 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858529 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858561 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8mj5\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-kube-api-access-n8mj5\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858583 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858610 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858642 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e52bf9f3-9166-46f6-ba25-809a4212cf11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858670 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858717 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858742 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e52bf9f3-9166-46f6-ba25-809a4212cf11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.858762 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960669 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e52bf9f3-9166-46f6-ba25-809a4212cf11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960732 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960785 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960810 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e52bf9f3-9166-46f6-ba25-809a4212cf11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960837 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960859 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960886 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960929 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960974 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8mj5\" (UniqueName: 
\"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-kube-api-access-n8mj5\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.960999 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.961027 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.961310 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.961594 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.964802 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.964860 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.965117 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.965123 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.965306 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.967637 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.967777 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e267d96d81c3e25f1af4c1a5f3d3f0264a40789a12d5114cfd7f9ee2a683f073/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.968578 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e52bf9f3-9166-46f6-ba25-809a4212cf11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.973162 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.973523 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.973976 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.974258 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.977060 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e52bf9f3-9166-46f6-ba25-809a4212cf11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.977755 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.979352 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8mj5\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-kube-api-access-n8mj5\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.986904 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.996420 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-lnhbn"] Jan 05 23:13:47 crc kubenswrapper[5034]: I0105 23:13:47.999716 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"rabbitmq-cell1-server-0\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.047928 5034 generic.go:334] "Generic (PLEG): container finished" podID="1f131b93-08f3-49dd-bd95-2f11a53ca9b5" containerID="96cdf965f6fe50e27ae122da8ca8d593aea7fd79954c41fac6bf55d7c6e361f0" exitCode=0 Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.048119 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" event={"ID":"1f131b93-08f3-49dd-bd95-2f11a53ca9b5","Type":"ContainerDied","Data":"96cdf965f6fe50e27ae122da8ca8d593aea7fd79954c41fac6bf55d7c6e361f0"} Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.048158 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" event={"ID":"1f131b93-08f3-49dd-bd95-2f11a53ca9b5","Type":"ContainerStarted","Data":"c3b628fe8bed374485203aabd5187283327ddfd009f27d9570e40c8c1fafcb56"} Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.060504 5034 generic.go:334] "Generic (PLEG): container finished" podID="6a8f2b02-7361-4bf0-a515-8ffa7d183879" containerID="ca0a26ed6d0f3a738670ce88fa7c029b754742f3cb7bd7ee60f3f0729c917bcb" exitCode=0 Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.060738 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" event={"ID":"6a8f2b02-7361-4bf0-a515-8ffa7d183879","Type":"ContainerDied","Data":"ca0a26ed6d0f3a738670ce88fa7c029b754742f3cb7bd7ee60f3f0729c917bcb"} Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.060772 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" event={"ID":"6a8f2b02-7361-4bf0-a515-8ffa7d183879","Type":"ContainerStarted","Data":"251e51aa535c9b472763c638e14042b08a8d9ede636fbe15cc300b13945ab648"} Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.063922 5034 generic.go:334] "Generic (PLEG): container finished" podID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" containerID="37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166" exitCode=0 Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.064137 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" event={"ID":"8e0eb56f-8954-4ae6-a20c-52271fa70c91","Type":"ContainerDied","Data":"37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166"} Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.064289 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" event={"ID":"8e0eb56f-8954-4ae6-a20c-52271fa70c91","Type":"ContainerStarted","Data":"5d1a29639225d518e9439e509641266951f7607b793d3284b2f671bdc8ca7bf4"} Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.066692 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" event={"ID":"cf778645-f825-4394-a3d2-84b640e6ade8","Type":"ContainerStarted","Data":"163e31c8a708d47a73aad3974958f78856ac719f139f07039bf7466d566ab7d0"} Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.067839 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-p8r7c" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.070245 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.284642 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.286826 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.293265 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.293389 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.293461 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.293610 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.293811 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.294185 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kb9k6" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.295463 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 05 23:13:48 crc kubenswrapper[5034]: E0105 23:13:48.300677 5034 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 05 23:13:48 crc kubenswrapper[5034]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/8e0eb56f-8954-4ae6-a20c-52271fa70c91/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 05 23:13:48 crc kubenswrapper[5034]: > podSandboxID="5d1a29639225d518e9439e509641266951f7607b793d3284b2f671bdc8ca7bf4" Jan 05 23:13:48 crc kubenswrapper[5034]: E0105 23:13:48.301153 5034 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 05 23:13:48 crc kubenswrapper[5034]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb6hc5h68h68h594h659hdbh679h65ch5f6hdch6h5b9h8fh55hfhf8h57fhc7h56ch687h669h559h678h5dhc7hf7h697h5d6h9ch669h54fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qls2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-865d9b578f-dtpqm_openstack(8e0eb56f-8954-4ae6-a20c-52271fa70c91): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/8e0eb56f-8954-4ae6-a20c-52271fa70c91/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 05 23:13:48 crc kubenswrapper[5034]: > logger="UnhandledError" Jan 05 23:13:48 crc kubenswrapper[5034]: E0105 23:13:48.302394 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/8e0eb56f-8954-4ae6-a20c-52271fa70c91/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" podUID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.302687 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.366547 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-config-data\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " 
pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.366629 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.366673 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz56r\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-kube-api-access-kz56r\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.366699 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.366752 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.366815 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.366839 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.367054 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.367092 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.367119 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " 
pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.367144 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468302 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468385 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz56r\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-kube-api-access-kz56r\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468448 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468474 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468549 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468590 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468630 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468674 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468693 5034 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468717 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.468768 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-config-data\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.470193 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-config-data\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.472194 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.473067 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.473570 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.474236 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.477387 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.479860 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.482214 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.484285 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.484311 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.484328 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/95556719d6ad308ec85ffafeee385caa05042983aa8919e0046b617ec0decd55/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.493061 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz56r\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-kube-api-access-kz56r\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.523390 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"rabbitmq-server-0\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.548489 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.550941 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.556116 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.671988 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64nrs\" (UniqueName: \"kubernetes.io/projected/6a8f2b02-7361-4bf0-a515-8ffa7d183879-kube-api-access-64nrs\") pod \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.672117 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dtfd\" (UniqueName: \"kubernetes.io/projected/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-kube-api-access-9dtfd\") pod \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\" (UID: \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\") " Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.672205 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-config\") pod \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.672365 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-config\") pod \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\" (UID: \"1f131b93-08f3-49dd-bd95-2f11a53ca9b5\") " Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.672398 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-dns-svc\") pod \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\" (UID: \"6a8f2b02-7361-4bf0-a515-8ffa7d183879\") " Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.678757 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8f2b02-7361-4bf0-a515-8ffa7d183879-kube-api-access-64nrs" (OuterVolumeSpecName: "kube-api-access-64nrs") pod "6a8f2b02-7361-4bf0-a515-8ffa7d183879" (UID: "6a8f2b02-7361-4bf0-a515-8ffa7d183879"). InnerVolumeSpecName "kube-api-access-64nrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.682704 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-kube-api-access-9dtfd" (OuterVolumeSpecName: "kube-api-access-9dtfd") pod "1f131b93-08f3-49dd-bd95-2f11a53ca9b5" (UID: "1f131b93-08f3-49dd-bd95-2f11a53ca9b5"). InnerVolumeSpecName "kube-api-access-9dtfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.687565 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.697204 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-config" (OuterVolumeSpecName: "config") pod "1f131b93-08f3-49dd-bd95-2f11a53ca9b5" (UID: "1f131b93-08f3-49dd-bd95-2f11a53ca9b5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.702404 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a8f2b02-7361-4bf0-a515-8ffa7d183879" (UID: "6a8f2b02-7361-4bf0-a515-8ffa7d183879"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.705126 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-config" (OuterVolumeSpecName: "config") pod "6a8f2b02-7361-4bf0-a515-8ffa7d183879" (UID: "6a8f2b02-7361-4bf0-a515-8ffa7d183879"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.774598 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64nrs\" (UniqueName: \"kubernetes.io/projected/6a8f2b02-7361-4bf0-a515-8ffa7d183879-kube-api-access-64nrs\") on node \"crc\" DevicePath \"\"" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.774642 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dtfd\" (UniqueName: \"kubernetes.io/projected/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-kube-api-access-9dtfd\") on node \"crc\" DevicePath \"\"" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.774652 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.774663 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f131b93-08f3-49dd-bd95-2f11a53ca9b5-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.774674 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a8f2b02-7361-4bf0-a515-8ffa7d183879-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.800109 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 05 23:13:48 crc kubenswrapper[5034]: E0105 23:13:48.800522 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f131b93-08f3-49dd-bd95-2f11a53ca9b5" containerName="init" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.800544 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f131b93-08f3-49dd-bd95-2f11a53ca9b5" containerName="init" Jan 05 23:13:48 crc kubenswrapper[5034]: E0105 23:13:48.800580 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8f2b02-7361-4bf0-a515-8ffa7d183879" containerName="init" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.800589 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8f2b02-7361-4bf0-a515-8ffa7d183879" containerName="init" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.800750 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f131b93-08f3-49dd-bd95-2f11a53ca9b5" containerName="init" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.800766 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a8f2b02-7361-4bf0-a515-8ffa7d183879" containerName="init" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 
23:13:48.801598 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.804184 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fx5qj" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.804401 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.805029 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.811830 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.816541 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.829224 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.876569 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f5f475-a53a-467e-8c0f-365d09603cd0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.876648 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f5f475-a53a-467e-8c0f-365d09603cd0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.876691 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07f5f475-a53a-467e-8c0f-365d09603cd0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.876738 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-kolla-config\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.876768 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smvnh\" (UniqueName: \"kubernetes.io/projected/07f5f475-a53a-467e-8c0f-365d09603cd0-kube-api-access-smvnh\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.876786 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.876807 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-config-data-default\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.876839 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6add36de-2778-4edc-97cd-31145d10f7a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add36de-2778-4edc-97cd-31145d10f7a7\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.978536 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvnh\" (UniqueName: \"kubernetes.io/projected/07f5f475-a53a-467e-8c0f-365d09603cd0-kube-api-access-smvnh\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.979124 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.979194 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-config-data-default\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.979241 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6add36de-2778-4edc-97cd-31145d10f7a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add36de-2778-4edc-97cd-31145d10f7a7\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.979401 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f5f475-a53a-467e-8c0f-365d09603cd0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.979485 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f5f475-a53a-467e-8c0f-365d09603cd0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.979560 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07f5f475-a53a-467e-8c0f-365d09603cd0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.979675 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-kolla-config\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.981827 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07f5f475-a53a-467e-8c0f-365d09603cd0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.982331 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.982678 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-config-data-default\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.982928 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07f5f475-a53a-467e-8c0f-365d09603cd0-kolla-config\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.985915 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.985977 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6add36de-2778-4edc-97cd-31145d10f7a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add36de-2778-4edc-97cd-31145d10f7a7\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8fdf423fd341d4fa5ba53600576223b0273f6a429226aa507bec29e51da35d7a/globalmount\"" pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.988098 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f5f475-a53a-467e-8c0f-365d09603cd0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.988625 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 23:13:48 crc kubenswrapper[5034]: I0105 23:13:48.989244 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f5f475-a53a-467e-8c0f-365d09603cd0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:48 crc kubenswrapper[5034]: W0105 23:13:48.990434 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50bf4e9b_bfcf_4add_bd8a_79ec64de6a1c.slice/crio-7b764c756ab5451cc1d5f946a320fe4ad48f0da9addaa0840a3266fe7523693f WatchSource:0}: Error finding container 7b764c756ab5451cc1d5f946a320fe4ad48f0da9addaa0840a3266fe7523693f: Status 404 returned error can't find the container with id 7b764c756ab5451cc1d5f946a320fe4ad48f0da9addaa0840a3266fe7523693f Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:48.999018 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvnh\" (UniqueName: \"kubernetes.io/projected/07f5f475-a53a-467e-8c0f-365d09603cd0-kube-api-access-smvnh\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.021043 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6add36de-2778-4edc-97cd-31145d10f7a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6add36de-2778-4edc-97cd-31145d10f7a7\") pod \"openstack-galera-0\" (UID: \"07f5f475-a53a-467e-8c0f-365d09603cd0\") " pod="openstack/openstack-galera-0" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.078690 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e52bf9f3-9166-46f6-ba25-809a4212cf11","Type":"ContainerStarted","Data":"86e9a9270f22250629bc87523b8036a32702210fab046c697c818670ef23b1db"} Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.080900 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c","Type":"ContainerStarted","Data":"7b764c756ab5451cc1d5f946a320fe4ad48f0da9addaa0840a3266fe7523693f"} Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.082737 5034 generic.go:334] "Generic (PLEG): container finished" 
podID="cf778645-f825-4394-a3d2-84b640e6ade8" containerID="3f9260572a0dabc63a9c80f88bb9358c0a611671a33ba8160d57df0d8e07696f" exitCode=0 Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.082797 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" event={"ID":"cf778645-f825-4394-a3d2-84b640e6ade8","Type":"ContainerDied","Data":"3f9260572a0dabc63a9c80f88bb9358c0a611671a33ba8160d57df0d8e07696f"} Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.088096 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" event={"ID":"1f131b93-08f3-49dd-bd95-2f11a53ca9b5","Type":"ContainerDied","Data":"c3b628fe8bed374485203aabd5187283327ddfd009f27d9570e40c8c1fafcb56"} Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.088122 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-nb8b9" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.088147 5034 scope.go:117] "RemoveContainer" containerID="96cdf965f6fe50e27ae122da8ca8d593aea7fd79954c41fac6bf55d7c6e361f0" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.093721 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.093748 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-cn9sw" event={"ID":"6a8f2b02-7361-4bf0-a515-8ffa7d183879","Type":"ContainerDied","Data":"251e51aa535c9b472763c638e14042b08a8d9ede636fbe15cc300b13945ab648"} Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.126425 5034 scope.go:117] "RemoveContainer" containerID="ca0a26ed6d0f3a738670ce88fa7c029b754742f3cb7bd7ee60f3f0729c917bcb" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.136263 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.196902 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-cn9sw"] Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.204529 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-cn9sw"] Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.230095 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-nb8b9"] Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.236438 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-nb8b9"] Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.932373 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f131b93-08f3-49dd-bd95-2f11a53ca9b5" path="/var/lib/kubelet/pods/1f131b93-08f3-49dd-bd95-2f11a53ca9b5/volumes" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.934167 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8f2b02-7361-4bf0-a515-8ffa7d183879" path="/var/lib/kubelet/pods/6a8f2b02-7361-4bf0-a515-8ffa7d183879/volumes" Jan 05 23:13:49 crc kubenswrapper[5034]: I0105 23:13:49.972244 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 23:13:49 crc kubenswrapper[5034]: W0105 23:13:49.977182 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07f5f475_a53a_467e_8c0f_365d09603cd0.slice/crio-db754c46a707554b7ad4e90b201f4a8b66302a2846662b6366047f000a55cc97 WatchSource:0}: Error finding container db754c46a707554b7ad4e90b201f4a8b66302a2846662b6366047f000a55cc97: Status 404 returned error can't find the container with id db754c46a707554b7ad4e90b201f4a8b66302a2846662b6366047f000a55cc97 Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.111166 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" event={"ID":"8e0eb56f-8954-4ae6-a20c-52271fa70c91","Type":"ContainerStarted","Data":"80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155"} Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.111868 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.114553 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"07f5f475-a53a-467e-8c0f-365d09603cd0","Type":"ContainerStarted","Data":"db754c46a707554b7ad4e90b201f4a8b66302a2846662b6366047f000a55cc97"} Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.119609 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" event={"ID":"cf778645-f825-4394-a3d2-84b640e6ade8","Type":"ContainerStarted","Data":"1be2f0af685533a4b1aae84e9a7620559fe3b964cf1919e526d4822702d03b6f"} Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.119889 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.126744 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e52bf9f3-9166-46f6-ba25-809a4212cf11","Type":"ContainerStarted","Data":"d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc"} Jan 05 23:13:50 crc kubenswrapper[5034]: 
I0105 23:13:50.139168 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" podStartSLOduration=4.139127911 podStartE2EDuration="4.139127911s" podCreationTimestamp="2026-01-05 23:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:13:50.135576151 +0000 UTC m=+4922.507575590" watchObservedRunningTime="2026-01-05 23:13:50.139127911 +0000 UTC m=+4922.511127350" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.154661 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" podStartSLOduration=3.154626781 podStartE2EDuration="3.154626781s" podCreationTimestamp="2026-01-05 23:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:13:50.15246421 +0000 UTC m=+4922.524463659" watchObservedRunningTime="2026-01-05 23:13:50.154626781 +0000 UTC m=+4922.526626220" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.469597 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.469692 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.488544 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.490118 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.492923 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.492968 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.493259 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8tlkb" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.493478 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.509535 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.623644 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1d0345a-603d-46a0-832f-94e63db6d310-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.623740 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.623805 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.623870 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d0345a-603d-46a0-832f-94e63db6d310-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.623920 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb1f260f-ab7b-4504-8570-aa2b49c67901\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb1f260f-ab7b-4504-8570-aa2b49c67901\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.624109 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1d0345a-603d-46a0-832f-94e63db6d310-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.624138 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.624175 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfdp\" (UniqueName: \"kubernetes.io/projected/e1d0345a-603d-46a0-832f-94e63db6d310-kube-api-access-8rfdp\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.725267 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb1f260f-ab7b-4504-8570-aa2b49c67901\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb1f260f-ab7b-4504-8570-aa2b49c67901\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.725412 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1d0345a-603d-46a0-832f-94e63db6d310-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.725437 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.725469 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfdp\" (UniqueName: \"kubernetes.io/projected/e1d0345a-603d-46a0-832f-94e63db6d310-kube-api-access-8rfdp\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.725511 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1d0345a-603d-46a0-832f-94e63db6d310-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.725538 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.725573 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.725594 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d0345a-603d-46a0-832f-94e63db6d310-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.726438 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e1d0345a-603d-46a0-832f-94e63db6d310-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.727443 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.727519 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.728463 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1d0345a-603d-46a0-832f-94e63db6d310-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.728562 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.728596 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb1f260f-ab7b-4504-8570-aa2b49c67901\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb1f260f-ab7b-4504-8570-aa2b49c67901\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/56c60dc0bfb3b0cf5496c6850563de655d8b5b901950534dc3ce12975eb87f1e/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.732356 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1d0345a-603d-46a0-832f-94e63db6d310-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.737757 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d0345a-603d-46a0-832f-94e63db6d310-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.744764 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfdp\" (UniqueName: \"kubernetes.io/projected/e1d0345a-603d-46a0-832f-94e63db6d310-kube-api-access-8rfdp\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.758843 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb1f260f-ab7b-4504-8570-aa2b49c67901\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb1f260f-ab7b-4504-8570-aa2b49c67901\") pod \"openstack-cell1-galera-0\" (UID: \"e1d0345a-603d-46a0-832f-94e63db6d310\") " pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.809581 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.877185 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.878935 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.894048 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dq8nm" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.894366 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.895071 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 05 23:13:50 crc kubenswrapper[5034]: I0105 23:13:50.917759 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.031427 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vnwh\" (UniqueName: \"kubernetes.io/projected/6f858aaf-b558-44a2-ab96-dc3372e35537-kube-api-access-4vnwh\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.031500 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f858aaf-b558-44a2-ab96-dc3372e35537-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.031814 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f858aaf-b558-44a2-ab96-dc3372e35537-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.031905 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f858aaf-b558-44a2-ab96-dc3372e35537-config-data\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.031935 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6f858aaf-b558-44a2-ab96-dc3372e35537-kolla-config\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.133329 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vnwh\" (UniqueName: \"kubernetes.io/projected/6f858aaf-b558-44a2-ab96-dc3372e35537-kube-api-access-4vnwh\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.133409 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f858aaf-b558-44a2-ab96-dc3372e35537-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.133476 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6f858aaf-b558-44a2-ab96-dc3372e35537-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.133511 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f858aaf-b558-44a2-ab96-dc3372e35537-config-data\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.133539 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6f858aaf-b558-44a2-ab96-dc3372e35537-kolla-config\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.134841 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6f858aaf-b558-44a2-ab96-dc3372e35537-kolla-config\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.135009 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f858aaf-b558-44a2-ab96-dc3372e35537-config-data\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.138641 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"07f5f475-a53a-467e-8c0f-365d09603cd0","Type":"ContainerStarted","Data":"2de04779aaa9cd0c2c2508a60f434a1f6c32fe1b95fd7cbf2ff57e32bce33215"} Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.139798 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f858aaf-b558-44a2-ab96-dc3372e35537-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.139820 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f858aaf-b558-44a2-ab96-dc3372e35537-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.142610 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c","Type":"ContainerStarted","Data":"f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647"} Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.154107 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vnwh\" (UniqueName: \"kubernetes.io/projected/6f858aaf-b558-44a2-ab96-dc3372e35537-kube-api-access-4vnwh\") pod \"memcached-0\" (UID: \"6f858aaf-b558-44a2-ab96-dc3372e35537\") " pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.226265 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.301094 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 23:13:51 crc kubenswrapper[5034]: W0105 23:13:51.307056 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1d0345a_603d_46a0_832f_94e63db6d310.slice/crio-cb9dc2e4f16c06f6334c74899dbf9c6c98e404791dcae059592463f7ded838f0 WatchSource:0}: Error finding container cb9dc2e4f16c06f6334c74899dbf9c6c98e404791dcae059592463f7ded838f0: Status 404 returned error can't find the container with id cb9dc2e4f16c06f6334c74899dbf9c6c98e404791dcae059592463f7ded838f0 Jan 05 23:13:51 crc kubenswrapper[5034]: I0105 23:13:51.690161 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 05 23:13:52 crc kubenswrapper[5034]: I0105 23:13:52.155370 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6f858aaf-b558-44a2-ab96-dc3372e35537","Type":"ContainerStarted","Data":"a053bf4707a40a877ea75d6135e7625763f3b3ab22e2d137506ebc808dd7d57b"} Jan 05 23:13:52 crc kubenswrapper[5034]: I0105 23:13:52.155492 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6f858aaf-b558-44a2-ab96-dc3372e35537","Type":"ContainerStarted","Data":"c1a9318b4d47e177096cb655bffdffd05fef9a40159bd06bf531b9f8931ac68d"} Jan 05 23:13:52 crc kubenswrapper[5034]: I0105 23:13:52.155531 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 05 23:13:52 crc kubenswrapper[5034]: I0105 23:13:52.161104 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e1d0345a-603d-46a0-832f-94e63db6d310","Type":"ContainerStarted","Data":"b179129cd7c28690e20b0f75d05aff8dacd2ccd23c047b499b1c33a1b926338d"} Jan 05 23:13:52 crc kubenswrapper[5034]: I0105 23:13:52.161860 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e1d0345a-603d-46a0-832f-94e63db6d310","Type":"ContainerStarted","Data":"cb9dc2e4f16c06f6334c74899dbf9c6c98e404791dcae059592463f7ded838f0"} Jan 05 23:13:52 crc kubenswrapper[5034]: I0105 23:13:52.181842 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.181815426 podStartE2EDuration="2.181815426s" podCreationTimestamp="2026-01-05 23:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:13:52.171241866 +0000 UTC m=+4924.543241315" watchObservedRunningTime="2026-01-05 23:13:52.181815426 +0000 UTC m=+4924.553814865" Jan 05 23:13:54 crc kubenswrapper[5034]: I0105 23:13:54.176286 5034 generic.go:334] "Generic (PLEG): container finished" podID="07f5f475-a53a-467e-8c0f-365d09603cd0" containerID="2de04779aaa9cd0c2c2508a60f434a1f6c32fe1b95fd7cbf2ff57e32bce33215" exitCode=0 Jan 05 23:13:54 crc kubenswrapper[5034]: I0105 23:13:54.176336 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"07f5f475-a53a-467e-8c0f-365d09603cd0","Type":"ContainerDied","Data":"2de04779aaa9cd0c2c2508a60f434a1f6c32fe1b95fd7cbf2ff57e32bce33215"} Jan 05 23:13:55 crc kubenswrapper[5034]: I0105 23:13:55.190414 5034 generic.go:334] "Generic (PLEG): container finished" 
podID="e1d0345a-603d-46a0-832f-94e63db6d310" containerID="b179129cd7c28690e20b0f75d05aff8dacd2ccd23c047b499b1c33a1b926338d" exitCode=0 Jan 05 23:13:55 crc kubenswrapper[5034]: I0105 23:13:55.190562 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e1d0345a-603d-46a0-832f-94e63db6d310","Type":"ContainerDied","Data":"b179129cd7c28690e20b0f75d05aff8dacd2ccd23c047b499b1c33a1b926338d"} Jan 05 23:13:55 crc kubenswrapper[5034]: I0105 23:13:55.194042 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"07f5f475-a53a-467e-8c0f-365d09603cd0","Type":"ContainerStarted","Data":"c2f3cc023abc440f1c326f9bf952f1f68082cbc9813ce9bf4368ed81a1b1e732"} Jan 05 23:13:55 crc kubenswrapper[5034]: I0105 23:13:55.255241 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.255181848 podStartE2EDuration="8.255181848s" podCreationTimestamp="2026-01-05 23:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:13:55.246964924 +0000 UTC m=+4927.618964403" watchObservedRunningTime="2026-01-05 23:13:55.255181848 +0000 UTC m=+4927.627181337" Jan 05 23:13:56 crc kubenswrapper[5034]: I0105 23:13:56.207454 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e1d0345a-603d-46a0-832f-94e63db6d310","Type":"ContainerStarted","Data":"77d0085e0444e29a0eddc66ec0124e33d83a7a6427ab1e57a7de9c75d9e24ce1"} Jan 05 23:13:56 crc kubenswrapper[5034]: I0105 23:13:56.228862 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 05 23:13:56 crc kubenswrapper[5034]: I0105 23:13:56.235736 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.235718322 podStartE2EDuration="7.235718322s" podCreationTimestamp="2026-01-05 23:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:13:56.228899168 +0000 UTC m=+4928.600898607" watchObservedRunningTime="2026-01-05 23:13:56.235718322 +0000 UTC m=+4928.607717761" Jan 05 23:13:56 crc kubenswrapper[5034]: I0105 23:13:56.916091 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:57 crc kubenswrapper[5034]: I0105 23:13:57.492655 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:13:57 crc kubenswrapper[5034]: I0105 23:13:57.552263 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-dtpqm"] Jan 05 23:13:57 crc kubenswrapper[5034]: I0105 23:13:57.552579 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" podUID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" containerName="dnsmasq-dns" containerID="cri-o://80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155" gracePeriod=10 Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.009913 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.158472 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qls2\" (UniqueName: \"kubernetes.io/projected/8e0eb56f-8954-4ae6-a20c-52271fa70c91-kube-api-access-2qls2\") pod \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.158522 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-config\") pod \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.158783 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-dns-svc\") pod \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\" (UID: \"8e0eb56f-8954-4ae6-a20c-52271fa70c91\") " Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.170459 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0eb56f-8954-4ae6-a20c-52271fa70c91-kube-api-access-2qls2" (OuterVolumeSpecName: "kube-api-access-2qls2") pod "8e0eb56f-8954-4ae6-a20c-52271fa70c91" (UID: "8e0eb56f-8954-4ae6-a20c-52271fa70c91"). InnerVolumeSpecName "kube-api-access-2qls2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.198855 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-config" (OuterVolumeSpecName: "config") pod "8e0eb56f-8954-4ae6-a20c-52271fa70c91" (UID: "8e0eb56f-8954-4ae6-a20c-52271fa70c91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.200726 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e0eb56f-8954-4ae6-a20c-52271fa70c91" (UID: "8e0eb56f-8954-4ae6-a20c-52271fa70c91"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.227681 5034 generic.go:334] "Generic (PLEG): container finished" podID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" containerID="80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155" exitCode=0 Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.227739 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" event={"ID":"8e0eb56f-8954-4ae6-a20c-52271fa70c91","Type":"ContainerDied","Data":"80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155"} Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.227825 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" event={"ID":"8e0eb56f-8954-4ae6-a20c-52271fa70c91","Type":"ContainerDied","Data":"5d1a29639225d518e9439e509641266951f7607b793d3284b2f671bdc8ca7bf4"} Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.227846 5034 scope.go:117] "RemoveContainer" containerID="80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.227876 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-dtpqm" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.248770 5034 scope.go:117] "RemoveContainer" containerID="37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.262261 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.262332 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qls2\" (UniqueName: \"kubernetes.io/projected/8e0eb56f-8954-4ae6-a20c-52271fa70c91-kube-api-access-2qls2\") on node \"crc\" DevicePath \"\"" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.262360 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e0eb56f-8954-4ae6-a20c-52271fa70c91-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.272726 5034 scope.go:117] "RemoveContainer" containerID="80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155" Jan 05 23:13:58 crc kubenswrapper[5034]: E0105 23:13:58.279751 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155\": container with ID starting with 80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155 not found: ID does not exist" containerID="80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.279857 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155"} err="failed to get container status \"80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155\": rpc error: code = NotFound desc = could not find container \"80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155\": container with ID starting with 80dc638846777bbb9d5da9312e777e3173e6b320de4b4bdcd6b2c59eca9ae155 not found: ID does not exist" Jan 05 23:13:58 crc kubenswrapper[5034]: 
I0105 23:13:58.280589 5034 scope.go:117] "RemoveContainer" containerID="37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166" Jan 05 23:13:58 crc kubenswrapper[5034]: E0105 23:13:58.281886 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166\": container with ID starting with 37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166 not found: ID does not exist" containerID="37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.281961 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166"} err="failed to get container status \"37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166\": rpc error: code = NotFound desc = could not find container \"37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166\": container with ID starting with 37b16763bb8798912b00ca18d20a9e0d9cdfb6e926324b2767a81e4373060166 not found: ID does not exist" Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.289279 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-dtpqm"] Jan 05 23:13:58 crc kubenswrapper[5034]: I0105 23:13:58.296546 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-dtpqm"] Jan 05 23:13:59 crc kubenswrapper[5034]: I0105 23:13:59.137516 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 05 23:13:59 crc kubenswrapper[5034]: I0105 23:13:59.138070 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 05 23:13:59 crc kubenswrapper[5034]: I0105 23:13:59.849296 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" path="/var/lib/kubelet/pods/8e0eb56f-8954-4ae6-a20c-52271fa70c91/volumes" Jan 05 23:14:00 crc kubenswrapper[5034]: I0105 23:14:00.810356 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 05 23:14:00 crc kubenswrapper[5034]: I0105 23:14:00.810419 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 05 23:14:01 crc kubenswrapper[5034]: I0105 23:14:01.433618 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 05 23:14:01 crc kubenswrapper[5034]: I0105 23:14:01.546509 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 05 23:14:03 crc kubenswrapper[5034]: I0105 23:14:03.109375 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 05 23:14:03 crc kubenswrapper[5034]: I0105 23:14:03.186233 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.783610 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vvs4t"] Jan 05 23:14:07 crc kubenswrapper[5034]: E0105 23:14:07.784934 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" containerName="init" Jan 05 23:14:07 crc 
kubenswrapper[5034]: I0105 23:14:07.784950 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" containerName="init" Jan 05 23:14:07 crc kubenswrapper[5034]: E0105 23:14:07.784968 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" containerName="dnsmasq-dns" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.784974 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" containerName="dnsmasq-dns" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.785150 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0eb56f-8954-4ae6-a20c-52271fa70c91" containerName="dnsmasq-dns" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.785820 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.789528 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.796778 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vvs4t"] Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.828389 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-operator-scripts\") pod \"root-account-create-update-vvs4t\" (UID: \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\") " pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.828516 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfvmx\" (UniqueName: \"kubernetes.io/projected/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-kube-api-access-pfvmx\") pod \"root-account-create-update-vvs4t\" (UID: \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\") " pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.930357 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-operator-scripts\") pod \"root-account-create-update-vvs4t\" (UID: \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\") " pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.930739 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfvmx\" (UniqueName: \"kubernetes.io/projected/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-kube-api-access-pfvmx\") pod \"root-account-create-update-vvs4t\" (UID: \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\") " pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.932263 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-operator-scripts\") pod \"root-account-create-update-vvs4t\" (UID: \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\") " pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:07 crc kubenswrapper[5034]: I0105 23:14:07.955233 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfvmx\" (UniqueName: 
\"kubernetes.io/projected/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-kube-api-access-pfvmx\") pod \"root-account-create-update-vvs4t\" (UID: \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\") " pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:08 crc kubenswrapper[5034]: I0105 23:14:08.139487 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:08 crc kubenswrapper[5034]: I0105 23:14:08.561474 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vvs4t"] Jan 05 23:14:09 crc kubenswrapper[5034]: I0105 23:14:09.370350 5034 generic.go:334] "Generic (PLEG): container finished" podID="2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176" containerID="bce11df8a6e6a9bf23d13a961707218f5ecaa5c2880b96f8ad80b1aa14b1502a" exitCode=0 Jan 05 23:14:09 crc kubenswrapper[5034]: I0105 23:14:09.370448 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vvs4t" event={"ID":"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176","Type":"ContainerDied","Data":"bce11df8a6e6a9bf23d13a961707218f5ecaa5c2880b96f8ad80b1aa14b1502a"} Jan 05 23:14:09 crc kubenswrapper[5034]: I0105 23:14:09.370660 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vvs4t" event={"ID":"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176","Type":"ContainerStarted","Data":"27abe5a895fd1054966d1492aa779d6a0dbac439cf86adc7251e1b93c8e23eeb"} Jan 05 23:14:10 crc kubenswrapper[5034]: I0105 23:14:10.741981 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:10 crc kubenswrapper[5034]: I0105 23:14:10.778753 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-operator-scripts\") pod \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\" (UID: \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\") " Jan 05 23:14:10 crc kubenswrapper[5034]: I0105 23:14:10.778830 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfvmx\" (UniqueName: \"kubernetes.io/projected/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-kube-api-access-pfvmx\") pod \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\" (UID: \"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176\") " Jan 05 23:14:10 crc kubenswrapper[5034]: I0105 23:14:10.779933 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176" (UID: "2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:10 crc kubenswrapper[5034]: I0105 23:14:10.785873 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-kube-api-access-pfvmx" (OuterVolumeSpecName: "kube-api-access-pfvmx") pod "2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176" (UID: "2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176"). InnerVolumeSpecName "kube-api-access-pfvmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:10 crc kubenswrapper[5034]: I0105 23:14:10.880822 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:10 crc kubenswrapper[5034]: I0105 23:14:10.880858 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfvmx\" (UniqueName: \"kubernetes.io/projected/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176-kube-api-access-pfvmx\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:11 crc kubenswrapper[5034]: I0105 23:14:11.394607 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vvs4t" event={"ID":"2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176","Type":"ContainerDied","Data":"27abe5a895fd1054966d1492aa779d6a0dbac439cf86adc7251e1b93c8e23eeb"} Jan 05 23:14:11 crc kubenswrapper[5034]: I0105 23:14:11.394677 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27abe5a895fd1054966d1492aa779d6a0dbac439cf86adc7251e1b93c8e23eeb" Jan 05 23:14:11 crc kubenswrapper[5034]: I0105 23:14:11.394694 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vvs4t" Jan 05 23:14:14 crc kubenswrapper[5034]: I0105 23:14:14.439861 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vvs4t"] Jan 05 23:14:14 crc kubenswrapper[5034]: I0105 23:14:14.451603 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vvs4t"] Jan 05 23:14:15 crc kubenswrapper[5034]: I0105 23:14:15.848536 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176" path="/var/lib/kubelet/pods/2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176/volumes" Jan 05 23:14:17 crc kubenswrapper[5034]: I0105 23:14:17.961193 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9fp4d"] Jan 05 23:14:17 crc kubenswrapper[5034]: E0105 23:14:17.961812 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176" containerName="mariadb-account-create-update" Jan 05 23:14:17 crc kubenswrapper[5034]: I0105 23:14:17.961830 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176" containerName="mariadb-account-create-update" Jan 05 23:14:17 crc kubenswrapper[5034]: I0105 23:14:17.962045 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3bbf77-50e0-4a9f-9fe9-2aefbe2ba176" containerName="mariadb-account-create-update" Jan 05 23:14:17 crc kubenswrapper[5034]: I0105 23:14:17.963390 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fp4d"] Jan 05 23:14:17 crc kubenswrapper[5034]: I0105 23:14:17.963510 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.119213 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-utilities\") pod \"redhat-operators-9fp4d\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.119538 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-catalog-content\") pod \"redhat-operators-9fp4d\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.119568 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkxh\" (UniqueName: \"kubernetes.io/projected/39ef77a5-33e6-4e5b-aeb0-628972e99f28-kube-api-access-9vkxh\") pod \"redhat-operators-9fp4d\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.221057 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-utilities\") pod \"redhat-operators-9fp4d\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.221163 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-catalog-content\") pod \"redhat-operators-9fp4d\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.221195 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vkxh\" (UniqueName: \"kubernetes.io/projected/39ef77a5-33e6-4e5b-aeb0-628972e99f28-kube-api-access-9vkxh\") pod \"redhat-operators-9fp4d\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.222581 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-utilities\") pod \"redhat-operators-9fp4d\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.222783 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-catalog-content\") pod \"redhat-operators-9fp4d\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.255491 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vkxh\" (UniqueName: \"kubernetes.io/projected/39ef77a5-33e6-4e5b-aeb0-628972e99f28-kube-api-access-9vkxh\") pod \"redhat-operators-9fp4d\" (UID: 
\"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.285597 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:18 crc kubenswrapper[5034]: I0105 23:14:18.734019 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fp4d"] Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.447830 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qhmmf"] Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.449621 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.453316 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.467470 5034 generic.go:334] "Generic (PLEG): container finished" podID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerID="e42b4466f3e5091707de8f95fa0581815ead7ca322bfebe8effa9e7921473169" exitCode=0 Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.467566 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fp4d" event={"ID":"39ef77a5-33e6-4e5b-aeb0-628972e99f28","Type":"ContainerDied","Data":"e42b4466f3e5091707de8f95fa0581815ead7ca322bfebe8effa9e7921473169"} Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.467677 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fp4d" event={"ID":"39ef77a5-33e6-4e5b-aeb0-628972e99f28","Type":"ContainerStarted","Data":"1f3779c7a853bfb97dcc2e50a4f3cab4e4a87dbf92d121671e2979f927053902"} Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.471179 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qhmmf"] Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.542332 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wflxf\" (UniqueName: \"kubernetes.io/projected/4c796c8e-6659-486b-8393-4d934e907b28-kube-api-access-wflxf\") pod \"root-account-create-update-qhmmf\" (UID: \"4c796c8e-6659-486b-8393-4d934e907b28\") " pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.542391 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c796c8e-6659-486b-8393-4d934e907b28-operator-scripts\") pod \"root-account-create-update-qhmmf\" (UID: \"4c796c8e-6659-486b-8393-4d934e907b28\") " pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.643867 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wflxf\" (UniqueName: \"kubernetes.io/projected/4c796c8e-6659-486b-8393-4d934e907b28-kube-api-access-wflxf\") pod \"root-account-create-update-qhmmf\" (UID: \"4c796c8e-6659-486b-8393-4d934e907b28\") " pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.643933 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4c796c8e-6659-486b-8393-4d934e907b28-operator-scripts\") pod \"root-account-create-update-qhmmf\" (UID: \"4c796c8e-6659-486b-8393-4d934e907b28\") " pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.645210 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c796c8e-6659-486b-8393-4d934e907b28-operator-scripts\") pod \"root-account-create-update-qhmmf\" (UID: \"4c796c8e-6659-486b-8393-4d934e907b28\") " pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.664399 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wflxf\" (UniqueName: \"kubernetes.io/projected/4c796c8e-6659-486b-8393-4d934e907b28-kube-api-access-wflxf\") pod \"root-account-create-update-qhmmf\" (UID: \"4c796c8e-6659-486b-8393-4d934e907b28\") " pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:19 crc kubenswrapper[5034]: I0105 23:14:19.779329 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.224604 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qhmmf"] Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.468792 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.469477 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.469564 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.470849 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.470945 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" gracePeriod=600 Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.480621 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fp4d" event={"ID":"39ef77a5-33e6-4e5b-aeb0-628972e99f28","Type":"ContainerStarted","Data":"577c5b808eca37873bbd9130a98f6cd2e709d188b06a2d12a5e54295bdd78386"} Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.486501 5034 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qhmmf" event={"ID":"4c796c8e-6659-486b-8393-4d934e907b28","Type":"ContainerStarted","Data":"d7d838d5144683cfdc034bd83586142c7200ae080f71b2460d00cc6a7b5ef9af"} Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.486563 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qhmmf" event={"ID":"4c796c8e-6659-486b-8393-4d934e907b28","Type":"ContainerStarted","Data":"3acb766c73058d4429ce6ccb0c7bb3722d9ea1e29fb386058ee614d126efd994"} Jan 05 23:14:20 crc kubenswrapper[5034]: I0105 23:14:20.529918 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qhmmf" podStartSLOduration=1.5298864399999998 podStartE2EDuration="1.52988644s" podCreationTimestamp="2026-01-05 23:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:14:20.52741625 +0000 UTC m=+4952.899415689" watchObservedRunningTime="2026-01-05 23:14:20.52988644 +0000 UTC m=+4952.901885879" Jan 05 23:14:20 crc kubenswrapper[5034]: E0105 23:14:20.591266 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:14:21 crc kubenswrapper[5034]: I0105 23:14:21.499848 5034 generic.go:334] "Generic (PLEG): container finished" podID="4c796c8e-6659-486b-8393-4d934e907b28" containerID="d7d838d5144683cfdc034bd83586142c7200ae080f71b2460d00cc6a7b5ef9af" exitCode=0 Jan 05 23:14:21 crc kubenswrapper[5034]: I0105 23:14:21.499918 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qhmmf" event={"ID":"4c796c8e-6659-486b-8393-4d934e907b28","Type":"ContainerDied","Data":"d7d838d5144683cfdc034bd83586142c7200ae080f71b2460d00cc6a7b5ef9af"} Jan 05 23:14:21 crc kubenswrapper[5034]: I0105 23:14:21.505688 5034 generic.go:334] "Generic (PLEG): container finished" podID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerID="577c5b808eca37873bbd9130a98f6cd2e709d188b06a2d12a5e54295bdd78386" exitCode=0 Jan 05 23:14:21 crc kubenswrapper[5034]: I0105 23:14:21.505789 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fp4d" event={"ID":"39ef77a5-33e6-4e5b-aeb0-628972e99f28","Type":"ContainerDied","Data":"577c5b808eca37873bbd9130a98f6cd2e709d188b06a2d12a5e54295bdd78386"} Jan 05 23:14:21 crc kubenswrapper[5034]: I0105 23:14:21.508935 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" exitCode=0 Jan 05 23:14:21 crc kubenswrapper[5034]: I0105 23:14:21.508992 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"} Jan 05 23:14:21 crc kubenswrapper[5034]: I0105 23:14:21.509056 5034 scope.go:117] "RemoveContainer" 
containerID="6958d2db4f5b2fc15030cb3b9a7b3e4850057b9ff8407ab2940c23e91d80bc16" Jan 05 23:14:21 crc kubenswrapper[5034]: I0105 23:14:21.511844 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:14:21 crc kubenswrapper[5034]: E0105 23:14:21.512200 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.521395 5034 generic.go:334] "Generic (PLEG): container finished" podID="e52bf9f3-9166-46f6-ba25-809a4212cf11" containerID="d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc" exitCode=0 Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.521478 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e52bf9f3-9166-46f6-ba25-809a4212cf11","Type":"ContainerDied","Data":"d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc"} Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.527240 5034 generic.go:334] "Generic (PLEG): container finished" podID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" containerID="f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647" exitCode=0 Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.527328 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c","Type":"ContainerDied","Data":"f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647"} Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.533222 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fp4d" event={"ID":"39ef77a5-33e6-4e5b-aeb0-628972e99f28","Type":"ContainerStarted","Data":"8c9b3f490bba565437be29f03be9b700b75260ed0feb604400bb74a9f865e8ea"} Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.655008 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9fp4d" podStartSLOduration=3.121474036 podStartE2EDuration="5.654984154s" podCreationTimestamp="2026-01-05 23:14:17 +0000 UTC" firstStartedPulling="2026-01-05 23:14:19.468875082 +0000 UTC m=+4951.840874521" lastFinishedPulling="2026-01-05 23:14:22.00238516 +0000 UTC m=+4954.374384639" observedRunningTime="2026-01-05 23:14:22.640806351 +0000 UTC m=+4955.012805790" watchObservedRunningTime="2026-01-05 23:14:22.654984154 +0000 UTC m=+4955.026983593" Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.852670 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.911411 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c796c8e-6659-486b-8393-4d934e907b28-operator-scripts\") pod \"4c796c8e-6659-486b-8393-4d934e907b28\" (UID: \"4c796c8e-6659-486b-8393-4d934e907b28\") " Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.911487 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wflxf\" (UniqueName: \"kubernetes.io/projected/4c796c8e-6659-486b-8393-4d934e907b28-kube-api-access-wflxf\") pod \"4c796c8e-6659-486b-8393-4d934e907b28\" (UID: \"4c796c8e-6659-486b-8393-4d934e907b28\") " Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.913357 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c796c8e-6659-486b-8393-4d934e907b28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c796c8e-6659-486b-8393-4d934e907b28" (UID: "4c796c8e-6659-486b-8393-4d934e907b28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:22 crc kubenswrapper[5034]: I0105 23:14:22.921422 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c796c8e-6659-486b-8393-4d934e907b28-kube-api-access-wflxf" (OuterVolumeSpecName: "kube-api-access-wflxf") pod "4c796c8e-6659-486b-8393-4d934e907b28" (UID: "4c796c8e-6659-486b-8393-4d934e907b28"). InnerVolumeSpecName "kube-api-access-wflxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.013960 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c796c8e-6659-486b-8393-4d934e907b28-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.014008 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wflxf\" (UniqueName: \"kubernetes.io/projected/4c796c8e-6659-486b-8393-4d934e907b28-kube-api-access-wflxf\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.549216 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c","Type":"ContainerStarted","Data":"b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294"} Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.549505 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.551028 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qhmmf" event={"ID":"4c796c8e-6659-486b-8393-4d934e907b28","Type":"ContainerDied","Data":"3acb766c73058d4429ce6ccb0c7bb3722d9ea1e29fb386058ee614d126efd994"} Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.551059 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3acb766c73058d4429ce6ccb0c7bb3722d9ea1e29fb386058ee614d126efd994" Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.551102 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qhmmf" Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.553552 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e52bf9f3-9166-46f6-ba25-809a4212cf11","Type":"ContainerStarted","Data":"b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32"} Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.583865 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.583831641 podStartE2EDuration="36.583831641s" podCreationTimestamp="2026-01-05 23:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:14:23.57218816 +0000 UTC m=+4955.944187599" watchObservedRunningTime="2026-01-05 23:14:23.583831641 +0000 UTC m=+4955.955831090" Jan 05 23:14:23 crc kubenswrapper[5034]: I0105 23:14:23.612647 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.612617258 podStartE2EDuration="37.612617258s" podCreationTimestamp="2026-01-05 23:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:14:23.605315461 +0000 UTC m=+4955.977314900" watchObservedRunningTime="2026-01-05 23:14:23.612617258 +0000 UTC m=+4955.984616697" Jan 05 23:14:28 crc kubenswrapper[5034]: I0105 23:14:28.071011 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:28 crc kubenswrapper[5034]: I0105 23:14:28.286438 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:28 crc kubenswrapper[5034]: I0105 23:14:28.287641 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:28 crc kubenswrapper[5034]: I0105 23:14:28.335365 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:28 crc kubenswrapper[5034]: I0105 23:14:28.644207 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:31 crc kubenswrapper[5034]: I0105 23:14:31.777586 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9fp4d"] Jan 05 23:14:31 crc kubenswrapper[5034]: I0105 23:14:31.778264 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9fp4d" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerName="registry-server" containerID="cri-o://8c9b3f490bba565437be29f03be9b700b75260ed0feb604400bb74a9f865e8ea" gracePeriod=2 Jan 05 23:14:32 crc kubenswrapper[5034]: I0105 23:14:32.641214 5034 generic.go:334] "Generic (PLEG): container finished" podID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerID="8c9b3f490bba565437be29f03be9b700b75260ed0feb604400bb74a9f865e8ea" exitCode=0 Jan 05 23:14:32 crc kubenswrapper[5034]: I0105 23:14:32.641287 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fp4d" event={"ID":"39ef77a5-33e6-4e5b-aeb0-628972e99f28","Type":"ContainerDied","Data":"8c9b3f490bba565437be29f03be9b700b75260ed0feb604400bb74a9f865e8ea"} Jan 05 
23:14:32 crc kubenswrapper[5034]: I0105 23:14:32.839189 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:14:32 crc kubenswrapper[5034]: E0105 23:14:32.839794 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.241051 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.305855 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-catalog-content\") pod \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.305999 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-utilities\") pod \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.306026 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vkxh\" (UniqueName: \"kubernetes.io/projected/39ef77a5-33e6-4e5b-aeb0-628972e99f28-kube-api-access-9vkxh\") pod \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\" (UID: \"39ef77a5-33e6-4e5b-aeb0-628972e99f28\") " Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.307252 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-utilities" (OuterVolumeSpecName: "utilities") pod "39ef77a5-33e6-4e5b-aeb0-628972e99f28" (UID: "39ef77a5-33e6-4e5b-aeb0-628972e99f28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.319896 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ef77a5-33e6-4e5b-aeb0-628972e99f28-kube-api-access-9vkxh" (OuterVolumeSpecName: "kube-api-access-9vkxh") pod "39ef77a5-33e6-4e5b-aeb0-628972e99f28" (UID: "39ef77a5-33e6-4e5b-aeb0-628972e99f28"). InnerVolumeSpecName "kube-api-access-9vkxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.408549 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.408590 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vkxh\" (UniqueName: \"kubernetes.io/projected/39ef77a5-33e6-4e5b-aeb0-628972e99f28-kube-api-access-9vkxh\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.426685 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39ef77a5-33e6-4e5b-aeb0-628972e99f28" (UID: "39ef77a5-33e6-4e5b-aeb0-628972e99f28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.510387 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ef77a5-33e6-4e5b-aeb0-628972e99f28-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.652635 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fp4d" event={"ID":"39ef77a5-33e6-4e5b-aeb0-628972e99f28","Type":"ContainerDied","Data":"1f3779c7a853bfb97dcc2e50a4f3cab4e4a87dbf92d121671e2979f927053902"} Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.652700 5034 scope.go:117] "RemoveContainer" containerID="8c9b3f490bba565437be29f03be9b700b75260ed0feb604400bb74a9f865e8ea" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.652700 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fp4d" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.683379 5034 scope.go:117] "RemoveContainer" containerID="577c5b808eca37873bbd9130a98f6cd2e709d188b06a2d12a5e54295bdd78386" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.692491 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9fp4d"] Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.704930 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9fp4d"] Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.709259 5034 scope.go:117] "RemoveContainer" containerID="e42b4466f3e5091707de8f95fa0581815ead7ca322bfebe8effa9e7921473169" Jan 05 23:14:33 crc kubenswrapper[5034]: I0105 23:14:33.848820 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" path="/var/lib/kubelet/pods/39ef77a5-33e6-4e5b-aeb0-628972e99f28/volumes" Jan 05 23:14:38 crc kubenswrapper[5034]: I0105 23:14:38.073342 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:38 crc kubenswrapper[5034]: I0105 23:14:38.552304 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.848782 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-fb9jd"] Jan 05 23:14:43 crc kubenswrapper[5034]: E0105 23:14:43.849678 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerName="registry-server" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.849696 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerName="registry-server" Jan 05 23:14:43 crc kubenswrapper[5034]: E0105 23:14:43.849712 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerName="extract-content" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.849719 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerName="extract-content" Jan 05 23:14:43 crc kubenswrapper[5034]: E0105 23:14:43.849734 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerName="extract-utilities" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.849744 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" containerName="extract-utilities" Jan 05 23:14:43 crc kubenswrapper[5034]: E0105 23:14:43.849769 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c796c8e-6659-486b-8393-4d934e907b28" containerName="mariadb-account-create-update" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.849777 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c796c8e-6659-486b-8393-4d934e907b28" containerName="mariadb-account-create-update" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.849953 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c796c8e-6659-486b-8393-4d934e907b28" containerName="mariadb-account-create-update" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.849983 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ef77a5-33e6-4e5b-aeb0-628972e99f28" 
containerName="registry-server" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.851142 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.851973 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-fb9jd"] Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.980064 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-config\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.980156 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbptg\" (UniqueName: \"kubernetes.io/projected/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-kube-api-access-pbptg\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:43 crc kubenswrapper[5034]: I0105 23:14:43.980208 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-dns-svc\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.081634 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-config\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.081738 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbptg\" (UniqueName: \"kubernetes.io/projected/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-kube-api-access-pbptg\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.081781 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-dns-svc\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.083239 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-config\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.084769 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-dns-svc\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.102924 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pbptg\" (UniqueName: \"kubernetes.io/projected/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-kube-api-access-pbptg\") pod \"dnsmasq-dns-699964fbc-fb9jd\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.173050 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.654582 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.677963 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-fb9jd"] Jan 05 23:14:44 crc kubenswrapper[5034]: W0105 23:14:44.685827 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c8c7cf0_6bc1_4a94_97a4_a60bcec994e8.slice/crio-f202225cfef07365a1453510435cd5b103de27e779542a824a0c4bf592150b03 WatchSource:0}: Error finding container f202225cfef07365a1453510435cd5b103de27e779542a824a0c4bf592150b03: Status 404 returned error can't find the container with id f202225cfef07365a1453510435cd5b103de27e779542a824a0c4bf592150b03 Jan 05 23:14:44 crc kubenswrapper[5034]: I0105 23:14:44.746786 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" event={"ID":"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8","Type":"ContainerStarted","Data":"f202225cfef07365a1453510435cd5b103de27e779542a824a0c4bf592150b03"} Jan 05 23:14:45 crc kubenswrapper[5034]: I0105 23:14:45.419640 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:14:45 crc kubenswrapper[5034]: I0105 23:14:45.756414 5034 generic.go:334] "Generic (PLEG): container finished" podID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" containerID="84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72" exitCode=0 Jan 05 23:14:45 crc kubenswrapper[5034]: I0105 23:14:45.756468 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" event={"ID":"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8","Type":"ContainerDied","Data":"84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72"} Jan 05 23:14:46 crc kubenswrapper[5034]: I0105 23:14:46.784743 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" event={"ID":"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8","Type":"ContainerStarted","Data":"1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845"} Jan 05 23:14:46 crc kubenswrapper[5034]: I0105 23:14:46.785091 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:46 crc kubenswrapper[5034]: I0105 23:14:46.811317 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" podStartSLOduration=3.811293868 podStartE2EDuration="3.811293868s" podCreationTimestamp="2026-01-05 23:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:14:46.807615343 +0000 UTC m=+4979.179614782" watchObservedRunningTime="2026-01-05 23:14:46.811293868 +0000 UTC m=+4979.183293317" Jan 05 23:14:47 crc kubenswrapper[5034]: I0105 23:14:47.843536 5034 scope.go:117] "RemoveContainer" 
containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:14:47 crc kubenswrapper[5034]: E0105 23:14:47.844076 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:14:49 crc kubenswrapper[5034]: I0105 23:14:49.009206 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" containerName="rabbitmq" containerID="cri-o://b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294" gracePeriod=604796 Jan 05 23:14:49 crc kubenswrapper[5034]: I0105 23:14:49.832729 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e52bf9f3-9166-46f6-ba25-809a4212cf11" containerName="rabbitmq" containerID="cri-o://b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32" gracePeriod=604796 Jan 05 23:14:54 crc kubenswrapper[5034]: I0105 23:14:54.175113 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:14:54 crc kubenswrapper[5034]: I0105 23:14:54.267706 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-lnhbn"] Jan 05 23:14:54 crc kubenswrapper[5034]: I0105 23:14:54.268130 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" podUID="cf778645-f825-4394-a3d2-84b640e6ade8" containerName="dnsmasq-dns" containerID="cri-o://1be2f0af685533a4b1aae84e9a7620559fe3b964cf1919e526d4822702d03b6f" gracePeriod=10 Jan 05 23:14:54 crc kubenswrapper[5034]: I0105 23:14:54.844420 5034 generic.go:334] "Generic (PLEG): container finished" podID="cf778645-f825-4394-a3d2-84b640e6ade8" containerID="1be2f0af685533a4b1aae84e9a7620559fe3b964cf1919e526d4822702d03b6f" exitCode=0 Jan 05 23:14:54 crc kubenswrapper[5034]: I0105 23:14:54.844506 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" event={"ID":"cf778645-f825-4394-a3d2-84b640e6ade8","Type":"ContainerDied","Data":"1be2f0af685533a4b1aae84e9a7620559fe3b964cf1919e526d4822702d03b6f"} Jan 05 23:14:54 crc kubenswrapper[5034]: I0105 23:14:54.844861 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" event={"ID":"cf778645-f825-4394-a3d2-84b640e6ade8","Type":"ContainerDied","Data":"163e31c8a708d47a73aad3974958f78856ac719f139f07039bf7466d566ab7d0"} Jan 05 23:14:54 crc kubenswrapper[5034]: I0105 23:14:54.844883 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="163e31c8a708d47a73aad3974958f78856ac719f139f07039bf7466d566ab7d0" Jan 05 23:14:54 crc kubenswrapper[5034]: I0105 23:14:54.885640 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.079544 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-dns-svc\") pod \"cf778645-f825-4394-a3d2-84b640e6ade8\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.079834 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcwww\" (UniqueName: \"kubernetes.io/projected/cf778645-f825-4394-a3d2-84b640e6ade8-kube-api-access-tcwww\") pod \"cf778645-f825-4394-a3d2-84b640e6ade8\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.079899 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-config\") pod \"cf778645-f825-4394-a3d2-84b640e6ade8\" (UID: \"cf778645-f825-4394-a3d2-84b640e6ade8\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.089055 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf778645-f825-4394-a3d2-84b640e6ade8-kube-api-access-tcwww" (OuterVolumeSpecName: "kube-api-access-tcwww") pod "cf778645-f825-4394-a3d2-84b640e6ade8" (UID: "cf778645-f825-4394-a3d2-84b640e6ade8"). InnerVolumeSpecName "kube-api-access-tcwww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.123676 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf778645-f825-4394-a3d2-84b640e6ade8" (UID: "cf778645-f825-4394-a3d2-84b640e6ade8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.147145 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-config" (OuterVolumeSpecName: "config") pod "cf778645-f825-4394-a3d2-84b640e6ade8" (UID: "cf778645-f825-4394-a3d2-84b640e6ade8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.182243 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.182281 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf778645-f825-4394-a3d2-84b640e6ade8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.182292 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcwww\" (UniqueName: \"kubernetes.io/projected/cf778645-f825-4394-a3d2-84b640e6ade8-kube-api-access-tcwww\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.556984 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.589368 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-pod-info\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.589436 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-plugins\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.590157 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.596424 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-pod-info" (OuterVolumeSpecName: "pod-info") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.690831 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-erlang-cookie\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691051 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691150 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-config-data\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691186 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz56r\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-kube-api-access-kz56r\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691234 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-plugins-conf\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691261 5034 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-confd\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691347 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-tls\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691392 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-server-conf\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691418 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-erlang-cookie-secret\") pod \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\" (UID: \"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c\") " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691802 5034 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691819 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.691831 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.692687 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.696444 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.696601 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-kube-api-access-kz56r" (OuterVolumeSpecName: "kube-api-access-kz56r") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). 
InnerVolumeSpecName "kube-api-access-kz56r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.705006 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.708069 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa" (OuterVolumeSpecName: "persistence") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.711525 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-config-data" (OuterVolumeSpecName: "config-data") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.748875 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-server-conf" (OuterVolumeSpecName: "server-conf") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.775673 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" (UID: "50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.793871 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.793914 5034 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.793924 5034 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.793939 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.793987 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") on node \"crc\" " Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.794000 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.794011 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz56r\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-kube-api-access-kz56r\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.794022 5034 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.794031 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.809096 5034 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.809396 5034 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa") on node "crc" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.863528 5034 generic.go:334] "Generic (PLEG): container finished" podID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" containerID="b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294" exitCode=0 Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.863750 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-lnhbn" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.864064 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.864098 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c","Type":"ContainerDied","Data":"b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294"} Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.864301 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c","Type":"ContainerDied","Data":"7b764c756ab5451cc1d5f946a320fe4ad48f0da9addaa0840a3266fe7523693f"} Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.864445 5034 scope.go:117] "RemoveContainer" containerID="b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.887441 5034 scope.go:117] "RemoveContainer" containerID="f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.903045 5034 reconciler_common.go:293] "Volume detached for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.910489 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.927665 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.937742 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-lnhbn"] Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.940828 5034 scope.go:117] "RemoveContainer" containerID="b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294" Jan 05 23:14:55 crc kubenswrapper[5034]: E0105 23:14:55.942292 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294\": container with ID starting with b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294 not found: ID does not exist" containerID="b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.942357 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294"} err="failed to get container status \"b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294\": rpc error: code = NotFound desc = could not find container \"b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294\": container with ID starting with b39035823e06bc0f2ef79d5a9df05302e1c47f127d81d4d6191c8d202334b294 not found: ID does not exist" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.942398 5034 scope.go:117] "RemoveContainer" containerID="f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647" Jan 05 23:14:55 crc kubenswrapper[5034]: E0105 23:14:55.942836 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647\": container with ID starting with f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647 not found: ID does not exist" containerID="f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.943021 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647"} err="failed to get container status \"f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647\": rpc error: code = NotFound desc = could not find container \"f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647\": container with ID starting with f354c3fe1a395ce4478078ff5a9927c89ef0b046b4e23eb04995b6b77166f647 not found: ID does not exist" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.957782 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 23:14:55 crc kubenswrapper[5034]: E0105 23:14:55.958473 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf778645-f825-4394-a3d2-84b640e6ade8" containerName="init" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.958577 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf778645-f825-4394-a3d2-84b640e6ade8" containerName="init" Jan 05 23:14:55 crc kubenswrapper[5034]: E0105 23:14:55.958665 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf778645-f825-4394-a3d2-84b640e6ade8" containerName="dnsmasq-dns" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.958722 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf778645-f825-4394-a3d2-84b640e6ade8" containerName="dnsmasq-dns" Jan 05 23:14:55 crc kubenswrapper[5034]: E0105 23:14:55.958808 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" containerName="rabbitmq" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.958877 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" containerName="rabbitmq" Jan 05 23:14:55 crc kubenswrapper[5034]: E0105 23:14:55.959559 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" containerName="setup-container" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.959643 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" containerName="setup-container" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.959858 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf778645-f825-4394-a3d2-84b640e6ade8" containerName="dnsmasq-dns" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.959947 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" containerName="rabbitmq" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.961163 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.966190 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.966198 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.966874 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.967188 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kb9k6" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.967319 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.967525 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.967576 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.985962 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-lnhbn"] Jan 05 23:14:55 crc kubenswrapper[5034]: I0105 23:14:55.995619 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.107429 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d566j\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-kube-api-access-d566j\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.108370 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.108531 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.108649 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.108783 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/908c96c4-673a-4ee7-a399-cca966e2281b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 
23:14:56.108919 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-config-data\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.109123 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.109277 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.109335 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.109392 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.109442 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/908c96c4-673a-4ee7-a399-cca966e2281b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.211691 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/908c96c4-673a-4ee7-a399-cca966e2281b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.211750 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-config-data\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.211797 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.211848 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.211877 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.211908 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.211938 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/908c96c4-673a-4ee7-a399-cca966e2281b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.212000 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d566j\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-kube-api-access-d566j\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.212027 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.212065 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.212111 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.212910 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.213223 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.213897 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.214806 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/908c96c4-673a-4ee7-a399-cca966e2281b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.215024 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.216508 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/908c96c4-673a-4ee7-a399-cca966e2281b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.217719 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/908c96c4-673a-4ee7-a399-cca966e2281b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.217795 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.217835 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/95556719d6ad308ec85ffafeee385caa05042983aa8919e0046b617ec0decd55/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.218436 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.219814 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.234740 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d566j\" (UniqueName: \"kubernetes.io/projected/908c96c4-673a-4ee7-a399-cca966e2281b-kube-api-access-d566j\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.267434 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cc46673-e207-4fbd-9d7a-4fb26090f0fa\") pod \"rabbitmq-server-0\" (UID: \"908c96c4-673a-4ee7-a399-cca966e2281b\") " pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.342873 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.350899 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.432426 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8mj5\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-kube-api-access-n8mj5\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.432472 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-config-data\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.432499 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e52bf9f3-9166-46f6-ba25-809a4212cf11-erlang-cookie-secret\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.432517 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-confd\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.432544 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-plugins-conf\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.432569 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-tls\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.435773 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.436594 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52bf9f3-9166-46f6-ba25-809a4212cf11-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.438071 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.450459 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-kube-api-access-n8mj5" (OuterVolumeSpecName: "kube-api-access-n8mj5") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "kube-api-access-n8mj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.468648 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-config-data" (OuterVolumeSpecName: "config-data") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.533868 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-erlang-cookie\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.533929 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-server-conf\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.533964 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e52bf9f3-9166-46f6-ba25-809a4212cf11-pod-info\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.534013 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-plugins\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.534205 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"e52bf9f3-9166-46f6-ba25-809a4212cf11\" (UID: \"e52bf9f3-9166-46f6-ba25-809a4212cf11\") " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.534550 5034 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e52bf9f3-9166-46f6-ba25-809a4212cf11-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.534567 5034 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.534600 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 
23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.534632 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8mj5\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-kube-api-access-n8mj5\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.534649 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.539104 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.540011 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.544220 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e52bf9f3-9166-46f6-ba25-809a4212cf11-pod-info" (OuterVolumeSpecName: "pod-info") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.559064 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596" (OuterVolumeSpecName: "persistence") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "pvc-3b2654eb-a501-4239-8429-8fe6029cf596". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.576326 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.595747 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-server-conf" (OuterVolumeSpecName: "server-conf") pod "e52bf9f3-9166-46f6-ba25-809a4212cf11" (UID: "e52bf9f3-9166-46f6-ba25-809a4212cf11"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.635789 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.635824 5034 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e52bf9f3-9166-46f6-ba25-809a4212cf11-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.635835 5034 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e52bf9f3-9166-46f6-ba25-809a4212cf11-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.635844 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.635879 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") on node \"crc\" " Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.635893 5034 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e52bf9f3-9166-46f6-ba25-809a4212cf11-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.652289 5034 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.652904 5034 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3b2654eb-a501-4239-8429-8fe6029cf596" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596") on node "crc" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.737095 5034 reconciler_common.go:293] "Volume detached for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") on node \"crc\" DevicePath \"\"" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.938949 5034 generic.go:334] "Generic (PLEG): container finished" podID="e52bf9f3-9166-46f6-ba25-809a4212cf11" containerID="b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32" exitCode=0 Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.939123 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e52bf9f3-9166-46f6-ba25-809a4212cf11","Type":"ContainerDied","Data":"b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32"} Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.939165 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e52bf9f3-9166-46f6-ba25-809a4212cf11","Type":"ContainerDied","Data":"86e9a9270f22250629bc87523b8036a32702210fab046c697c818670ef23b1db"} Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.939190 5034 scope.go:117] "RemoveContainer" containerID="b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.939195 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.967194 5034 scope.go:117] "RemoveContainer" containerID="d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc" Jan 05 23:14:56 crc kubenswrapper[5034]: I0105 23:14:56.985683 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.015275 5034 scope.go:117] "RemoveContainer" containerID="b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32" Jan 05 23:14:57 crc kubenswrapper[5034]: E0105 23:14:57.016053 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32\": container with ID starting with b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32 not found: ID does not exist" containerID="b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.016108 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32"} err="failed to get container status \"b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32\": rpc error: code = NotFound desc = could not find container \"b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32\": container with ID starting with b92e5d26db96f586d5b0b00c88b9471be42e003274c63307c894d3325b562e32 not found: ID does not exist" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.016142 5034 scope.go:117] "RemoveContainer" 
containerID="d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.018640 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:14:57 crc kubenswrapper[5034]: E0105 23:14:57.020087 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc\": container with ID starting with d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc not found: ID does not exist" containerID="d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.020168 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc"} err="failed to get container status \"d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc\": rpc error: code = NotFound desc = could not find container \"d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc\": container with ID starting with d1c65374e7522d665b72262673903772cc3161601e60427681834425396cb6bc not found: ID does not exist" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.024063 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:14:57 crc kubenswrapper[5034]: E0105 23:14:57.024462 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52bf9f3-9166-46f6-ba25-809a4212cf11" containerName="setup-container" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.024483 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52bf9f3-9166-46f6-ba25-809a4212cf11" containerName="setup-container" Jan 05 23:14:57 crc kubenswrapper[5034]: E0105 23:14:57.024507 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52bf9f3-9166-46f6-ba25-809a4212cf11" containerName="rabbitmq" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.024514 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52bf9f3-9166-46f6-ba25-809a4212cf11" containerName="rabbitmq" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.024682 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52bf9f3-9166-46f6-ba25-809a4212cf11" containerName="rabbitmq" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.025506 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.029519 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.030171 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.030205 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.030241 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.030706 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.030857 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.032821 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.032898 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-p8r7c" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.145075 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/672f6cae-debf-4932-9a31-f10b8ce91e93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.145244 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.145607 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.145713 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.145786 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqgf\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-kube-api-access-mnqgf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.145857 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.145888 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.145957 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.146026 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.146115 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/672f6cae-debf-4932-9a31-f10b8ce91e93-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.146142 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.217188 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.247238 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.247281 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.247337 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
\"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.247358 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.247801 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.248192 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.248220 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.248271 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/672f6cae-debf-4932-9a31-f10b8ce91e93-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.248292 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.248316 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/672f6cae-debf-4932-9a31-f10b8ce91e93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.248340 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.248381 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.248454 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqgf\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-kube-api-access-mnqgf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.249504 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.250745 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/672f6cae-debf-4932-9a31-f10b8ce91e93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.251420 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.251751 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.252425 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/672f6cae-debf-4932-9a31-f10b8ce91e93-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.253822 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
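
The csi_attacher line above explains why there is no separate NodeStageVolume step for this PVC: the hostpath provisioner's node plugin does not advertise STAGE_UNSTAGE_VOLUME, so the kubelet skips MountDevice and publishes the volume directly. A sketch of the capability check a container orchestrator performs, using the CSI spec's Go bindings (the socket path below is an example, not taken from this log):

    // Ask a CSI node plugin whether it advertises STAGE_UNSTAGE_VOLUME,
    // the capability whose absence produces "Skipping MountDevice".
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // Example plugin socket path; real drivers register their own.
        conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        node := csi.NewNodeClient(conn)
        resp, err := node.NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
        if err != nil {
            log.Fatal(err)
        }
        staged := false
        for _, c := range resp.GetCapabilities() {
            if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
                staged = true
            }
        }
        fmt.Println("STAGE_UNSTAGE_VOLUME advertised:", staged)
    }
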
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.253854 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e267d96d81c3e25f1af4c1a5f3d3f0264a40789a12d5114cfd7f9ee2a683f073/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.257181 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/672f6cae-debf-4932-9a31-f10b8ce91e93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.266270 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqgf\" (UniqueName: \"kubernetes.io/projected/672f6cae-debf-4932-9a31-f10b8ce91e93-kube-api-access-mnqgf\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.282299 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b2654eb-a501-4239-8429-8fe6029cf596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b2654eb-a501-4239-8429-8fe6029cf596\") pod \"rabbitmq-cell1-server-0\" (UID: \"672f6cae-debf-4932-9a31-f10b8ce91e93\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.359801 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
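
The mount sequence above enumerates every volume in the rabbitmq-cell1-server-0 spec. As a reading aid, here is a hedged reconstruction of what that volume list plausibly looks like in corev1 terms; the ConfigMap and Secret object names come from the reflector lines in this log, but this is a sketch inferred from the log, not the operator's actual generated manifest:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // Volumes inferred from the MountVolume lines above.
    var volumes = []corev1.Volume{
        {Name: "config-data", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{Name: "rabbitmq-cell1-config-data"}}}},
        {Name: "server-conf", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{Name: "rabbitmq-cell1-server-conf"}}}},
        {Name: "plugins-conf", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{Name: "rabbitmq-cell1-plugins-conf"}}}},
        {Name: "erlang-cookie-secret", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
            SecretName: "rabbitmq-cell1-erlang-cookie"}}},
        {Name: "rabbitmq-erlang-cookie", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
        {Name: "rabbitmq-plugins", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
        {Name: "pod-info", VolumeSource: corev1.VolumeSource{DownwardAPI: &corev1.DownwardAPIVolumeSource{}}},
        // The log shows PV pvc-3b2654eb-... mounted via the
        // kubevirt.io.hostpath-provisioner CSI driver; the volume and claim
        // names below are hypothetical placeholders.
        {Name: "persistence", VolumeSource: corev1.VolumeSource{PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
            ClaimName: "persistence-rabbitmq-cell1-server-0"}}},
    }

    func main() {
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }
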
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.811300 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.849888 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c" path="/var/lib/kubelet/pods/50bf4e9b-bfcf-4add-bd8a-79ec64de6a1c/volumes" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.850749 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf778645-f825-4394-a3d2-84b640e6ade8" path="/var/lib/kubelet/pods/cf778645-f825-4394-a3d2-84b640e6ade8/volumes" Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.852492 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52bf9f3-9166-46f6-ba25-809a4212cf11" path="/var/lib/kubelet/pods/e52bf9f3-9166-46f6-ba25-809a4212cf11/volumes" Jan 05 23:14:57 crc kubenswrapper[5034]: W0105 23:14:57.883682 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod672f6cae_debf_4932_9a31_f10b8ce91e93.slice/crio-c1e230bf25037c0ee526dea278aea3ca987e0a46a753171201f6c92c8e0f8615 WatchSource:0}: Error finding container c1e230bf25037c0ee526dea278aea3ca987e0a46a753171201f6c92c8e0f8615: Status 404 returned error can't find the container with id c1e230bf25037c0ee526dea278aea3ca987e0a46a753171201f6c92c8e0f8615 Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.953938 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"672f6cae-debf-4932-9a31-f10b8ce91e93","Type":"ContainerStarted","Data":"c1e230bf25037c0ee526dea278aea3ca987e0a46a753171201f6c92c8e0f8615"} Jan 05 23:14:57 crc kubenswrapper[5034]: I0105 23:14:57.956529 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"908c96c4-673a-4ee7-a399-cca966e2281b","Type":"ContainerStarted","Data":"32bb9199e40615c3ed28a16ac5a5974260d55860866653d9d1acc3d226005873"} Jan 05 23:14:58 crc kubenswrapper[5034]: I0105 23:14:58.966214 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"908c96c4-673a-4ee7-a399-cca966e2281b","Type":"ContainerStarted","Data":"c8510a50c60ca353368ee7bd50e1ef09a969a7eef314195a247d22ad81658f1c"} Jan 05 23:14:59 crc kubenswrapper[5034]: I0105 23:14:59.976122 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"672f6cae-debf-4932-9a31-f10b8ce91e93","Type":"ContainerStarted","Data":"ffcc22ec17b964629991288407d9504176ab8c4292d1215bc66ee13efa1e49ee"} Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.162167 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"] Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.163229 5034 util.go:30] "No sandbox for pod can be found. 
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.167335 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.167575 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.169744 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"]
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.300663 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6gv6\" (UniqueName: \"kubernetes.io/projected/11859d3f-3463-4c43-b37d-b312915a1d46-kube-api-access-m6gv6\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.300732 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11859d3f-3463-4c43-b37d-b312915a1d46-secret-volume\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.300761 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11859d3f-3463-4c43-b37d-b312915a1d46-config-volume\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.402242 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6gv6\" (UniqueName: \"kubernetes.io/projected/11859d3f-3463-4c43-b37d-b312915a1d46-kube-api-access-m6gv6\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.402346 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11859d3f-3463-4c43-b37d-b312915a1d46-secret-volume\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.402393 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11859d3f-3463-4c43-b37d-b312915a1d46-config-volume\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"
Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.403778 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11859d3f-3463-4c43-b37d-b312915a1d46-config-volume\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"
\"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.409158 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11859d3f-3463-4c43-b37d-b312915a1d46-secret-volume\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.437402 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6gv6\" (UniqueName: \"kubernetes.io/projected/11859d3f-3463-4c43-b37d-b312915a1d46-kube-api-access-m6gv6\") pod \"collect-profiles-29460915-x2nxq\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.486486 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.768914 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq"] Jan 05 23:15:00 crc kubenswrapper[5034]: I0105 23:15:00.838842 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:15:00 crc kubenswrapper[5034]: E0105 23:15:00.839157 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:15:01 crc kubenswrapper[5034]: I0105 23:15:01.015528 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" event={"ID":"11859d3f-3463-4c43-b37d-b312915a1d46","Type":"ContainerStarted","Data":"4066e90821b479a514932baea3d7c86a9d7e12d503b02af7f3e5282d632f45c6"} Jan 05 23:15:01 crc kubenswrapper[5034]: I0105 23:15:01.016219 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" event={"ID":"11859d3f-3463-4c43-b37d-b312915a1d46","Type":"ContainerStarted","Data":"f4bcc5e309c972ed17d23aa2b47299002363b4c3394cb63133424105eeed3892"} Jan 05 23:15:01 crc kubenswrapper[5034]: I0105 23:15:01.037860 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" podStartSLOduration=1.03783275 podStartE2EDuration="1.03783275s" podCreationTimestamp="2026-01-05 23:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:15:01.037277594 +0000 UTC m=+4993.409277033" watchObservedRunningTime="2026-01-05 23:15:01.03783275 +0000 UTC m=+4993.409832189" Jan 05 23:15:02 crc kubenswrapper[5034]: I0105 23:15:02.024837 5034 generic.go:334] "Generic (PLEG): container finished" podID="11859d3f-3463-4c43-b37d-b312915a1d46" 
containerID="4066e90821b479a514932baea3d7c86a9d7e12d503b02af7f3e5282d632f45c6" exitCode=0 Jan 05 23:15:02 crc kubenswrapper[5034]: I0105 23:15:02.025380 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" event={"ID":"11859d3f-3463-4c43-b37d-b312915a1d46","Type":"ContainerDied","Data":"4066e90821b479a514932baea3d7c86a9d7e12d503b02af7f3e5282d632f45c6"} Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.357990 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.494814 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11859d3f-3463-4c43-b37d-b312915a1d46-config-volume\") pod \"11859d3f-3463-4c43-b37d-b312915a1d46\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.495160 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6gv6\" (UniqueName: \"kubernetes.io/projected/11859d3f-3463-4c43-b37d-b312915a1d46-kube-api-access-m6gv6\") pod \"11859d3f-3463-4c43-b37d-b312915a1d46\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.495513 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11859d3f-3463-4c43-b37d-b312915a1d46-secret-volume\") pod \"11859d3f-3463-4c43-b37d-b312915a1d46\" (UID: \"11859d3f-3463-4c43-b37d-b312915a1d46\") " Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.495866 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11859d3f-3463-4c43-b37d-b312915a1d46-config-volume" (OuterVolumeSpecName: "config-volume") pod "11859d3f-3463-4c43-b37d-b312915a1d46" (UID: "11859d3f-3463-4c43-b37d-b312915a1d46"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.501240 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11859d3f-3463-4c43-b37d-b312915a1d46-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11859d3f-3463-4c43-b37d-b312915a1d46" (UID: "11859d3f-3463-4c43-b37d-b312915a1d46"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.501362 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11859d3f-3463-4c43-b37d-b312915a1d46-kube-api-access-m6gv6" (OuterVolumeSpecName: "kube-api-access-m6gv6") pod "11859d3f-3463-4c43-b37d-b312915a1d46" (UID: "11859d3f-3463-4c43-b37d-b312915a1d46"). InnerVolumeSpecName "kube-api-access-m6gv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.598506 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11859d3f-3463-4c43-b37d-b312915a1d46-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.598582 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6gv6\" (UniqueName: \"kubernetes.io/projected/11859d3f-3463-4c43-b37d-b312915a1d46-kube-api-access-m6gv6\") on node \"crc\" DevicePath \"\"" Jan 05 23:15:03 crc kubenswrapper[5034]: I0105 23:15:03.598606 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11859d3f-3463-4c43-b37d-b312915a1d46-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 23:15:04 crc kubenswrapper[5034]: I0105 23:15:04.058925 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" event={"ID":"11859d3f-3463-4c43-b37d-b312915a1d46","Type":"ContainerDied","Data":"f4bcc5e309c972ed17d23aa2b47299002363b4c3394cb63133424105eeed3892"} Jan 05 23:15:04 crc kubenswrapper[5034]: I0105 23:15:04.058991 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4bcc5e309c972ed17d23aa2b47299002363b4c3394cb63133424105eeed3892" Jan 05 23:15:04 crc kubenswrapper[5034]: I0105 23:15:04.059062 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460915-x2nxq" Jan 05 23:15:04 crc kubenswrapper[5034]: I0105 23:15:04.453796 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6"] Jan 05 23:15:04 crc kubenswrapper[5034]: I0105 23:15:04.462754 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460870-55ts6"] Jan 05 23:15:05 crc kubenswrapper[5034]: I0105 23:15:05.859267 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="915ee5d3-c02a-4ed3-a0b9-9e9490620077" path="/var/lib/kubelet/pods/915ee5d3-c02a-4ed3-a0b9-9e9490620077/volumes" Jan 05 23:15:13 crc kubenswrapper[5034]: I0105 23:15:13.838175 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:15:13 crc kubenswrapper[5034]: E0105 23:15:13.839198 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:15:27 crc kubenswrapper[5034]: I0105 23:15:27.843995 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:15:27 crc kubenswrapper[5034]: E0105 23:15:27.844789 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:15:31 crc kubenswrapper[5034]: I0105 23:15:31.303196 5034 generic.go:334] "Generic (PLEG): container finished" podID="908c96c4-673a-4ee7-a399-cca966e2281b" containerID="c8510a50c60ca353368ee7bd50e1ef09a969a7eef314195a247d22ad81658f1c" exitCode=0 Jan 05 23:15:31 crc kubenswrapper[5034]: I0105 23:15:31.303318 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"908c96c4-673a-4ee7-a399-cca966e2281b","Type":"ContainerDied","Data":"c8510a50c60ca353368ee7bd50e1ef09a969a7eef314195a247d22ad81658f1c"} Jan 05 23:15:32 crc kubenswrapper[5034]: I0105 23:15:32.314011 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"908c96c4-673a-4ee7-a399-cca966e2281b","Type":"ContainerStarted","Data":"1d47795f09386688d5c0fafa9ce068736d4227561336e94b941c94775871d047"} Jan 05 23:15:32 crc kubenswrapper[5034]: I0105 23:15:32.314724 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 05 23:15:32 crc kubenswrapper[5034]: I0105 23:15:32.315898 5034 generic.go:334] "Generic (PLEG): container finished" podID="672f6cae-debf-4932-9a31-f10b8ce91e93" containerID="ffcc22ec17b964629991288407d9504176ab8c4292d1215bc66ee13efa1e49ee" exitCode=0 Jan 05 23:15:32 crc kubenswrapper[5034]: I0105 23:15:32.315933 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"672f6cae-debf-4932-9a31-f10b8ce91e93","Type":"ContainerDied","Data":"ffcc22ec17b964629991288407d9504176ab8c4292d1215bc66ee13efa1e49ee"} Jan 05 23:15:32 crc kubenswrapper[5034]: I0105 23:15:32.354131 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.354104183 podStartE2EDuration="37.354104183s" podCreationTimestamp="2026-01-05 23:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:15:32.336776531 +0000 UTC m=+5024.708775960" watchObservedRunningTime="2026-01-05 23:15:32.354104183 +0000 UTC m=+5024.726103622" Jan 05 23:15:33 crc kubenswrapper[5034]: I0105 23:15:33.328829 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"672f6cae-debf-4932-9a31-f10b8ce91e93","Type":"ContainerStarted","Data":"76b87321ce954b5be0d5d0fcdd3528c82580c6038bbcae97f82c1bef183ad696"} Jan 05 23:15:33 crc kubenswrapper[5034]: I0105 23:15:33.352350 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.352323869 podStartE2EDuration="37.352323869s" podCreationTimestamp="2026-01-05 23:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:15:33.349165749 +0000 UTC m=+5025.721165178" watchObservedRunningTime="2026-01-05 23:15:33.352323869 +0000 UTC m=+5025.724323308" Jan 05 23:15:37 crc kubenswrapper[5034]: I0105 23:15:37.360130 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 05 23:15:41 crc kubenswrapper[5034]: I0105 23:15:41.839284 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:15:41 crc kubenswrapper[5034]: E0105 
Jan 05 23:15:46 crc kubenswrapper[5034]: I0105 23:15:46.347497 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 05 23:15:47 crc kubenswrapper[5034]: I0105 23:15:47.363302 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.206348 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"]
Jan 05 23:15:52 crc kubenswrapper[5034]: E0105 23:15:52.207405 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11859d3f-3463-4c43-b37d-b312915a1d46" containerName="collect-profiles"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.207421 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="11859d3f-3463-4c43-b37d-b312915a1d46" containerName="collect-profiles"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.207605 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="11859d3f-3463-4c43-b37d-b312915a1d46" containerName="collect-profiles"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.208545 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.212011 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-htnfg"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.226028 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"]
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.302155 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7lqf\" (UniqueName: \"kubernetes.io/projected/36c488ec-504f-4d15-ae0f-30ca25b06729-kube-api-access-m7lqf\") pod \"mariadb-client-1-default\" (UID: \"36c488ec-504f-4d15-ae0f-30ca25b06729\") " pod="openstack/mariadb-client-1-default"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.404273 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7lqf\" (UniqueName: \"kubernetes.io/projected/36c488ec-504f-4d15-ae0f-30ca25b06729-kube-api-access-m7lqf\") pod \"mariadb-client-1-default\" (UID: \"36c488ec-504f-4d15-ae0f-30ca25b06729\") " pod="openstack/mariadb-client-1-default"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.425292 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7lqf\" (UniqueName: \"kubernetes.io/projected/36c488ec-504f-4d15-ae0f-30ca25b06729-kube-api-access-m7lqf\") pod \"mariadb-client-1-default\" (UID: \"36c488ec-504f-4d15-ae0f-30ca25b06729\") " pod="openstack/mariadb-client-1-default"
Jan 05 23:15:52 crc kubenswrapper[5034]: I0105 23:15:52.539393 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default"
Jan 05 23:15:53 crc kubenswrapper[5034]: I0105 23:15:53.087655 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"]
Jan 05 23:15:53 crc kubenswrapper[5034]: I0105 23:15:53.099338 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 05 23:15:53 crc kubenswrapper[5034]: I0105 23:15:53.501413 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"36c488ec-504f-4d15-ae0f-30ca25b06729","Type":"ContainerStarted","Data":"f2693122c470463d1975c5c8bd8197209ce71336b6698d8e7e021af5894d38dc"}
Jan 05 23:15:54 crc kubenswrapper[5034]: I0105 23:15:54.511566 5034 generic.go:334] "Generic (PLEG): container finished" podID="36c488ec-504f-4d15-ae0f-30ca25b06729" containerID="b2469943207d24949e1f390b8062a8300a3e114b02c1bf1865affd11ffdb6293" exitCode=0
Jan 05 23:15:54 crc kubenswrapper[5034]: I0105 23:15:54.511618 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"36c488ec-504f-4d15-ae0f-30ca25b06729","Type":"ContainerDied","Data":"b2469943207d24949e1f390b8062a8300a3e114b02c1bf1865affd11ffdb6293"}
Jan 05 23:15:54 crc kubenswrapper[5034]: I0105 23:15:54.838586 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:15:54 crc kubenswrapper[5034]: E0105 23:15:54.838796 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:15:55 crc kubenswrapper[5034]: I0105 23:15:55.895935 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default"
Jan 05 23:15:55 crc kubenswrapper[5034]: I0105 23:15:55.924422 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_36c488ec-504f-4d15-ae0f-30ca25b06729/mariadb-client-1-default/0.log"
Jan 05 23:15:55 crc kubenswrapper[5034]: I0105 23:15:55.955962 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"]
Jan 05 23:15:55 crc kubenswrapper[5034]: I0105 23:15:55.962460 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"]
Jan 05 23:15:55 crc kubenswrapper[5034]: I0105 23:15:55.965699 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7lqf\" (UniqueName: \"kubernetes.io/projected/36c488ec-504f-4d15-ae0f-30ca25b06729-kube-api-access-m7lqf\") pod \"36c488ec-504f-4d15-ae0f-30ca25b06729\" (UID: \"36c488ec-504f-4d15-ae0f-30ca25b06729\") "
Jan 05 23:15:55 crc kubenswrapper[5034]: I0105 23:15:55.972376 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c488ec-504f-4d15-ae0f-30ca25b06729-kube-api-access-m7lqf" (OuterVolumeSpecName: "kube-api-access-m7lqf") pod "36c488ec-504f-4d15-ae0f-30ca25b06729" (UID: "36c488ec-504f-4d15-ae0f-30ca25b06729"). InnerVolumeSpecName "kube-api-access-m7lqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.067719 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7lqf\" (UniqueName: \"kubernetes.io/projected/36c488ec-504f-4d15-ae0f-30ca25b06729-kube-api-access-m7lqf\") on node \"crc\" DevicePath \"\""
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.527583 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2693122c470463d1975c5c8bd8197209ce71336b6698d8e7e021af5894d38dc"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.527671 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.565323 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"]
Jan 05 23:15:56 crc kubenswrapper[5034]: E0105 23:15:56.565879 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c488ec-504f-4d15-ae0f-30ca25b06729" containerName="mariadb-client-1-default"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.565896 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c488ec-504f-4d15-ae0f-30ca25b06729" containerName="mariadb-client-1-default"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.566036 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c488ec-504f-4d15-ae0f-30ca25b06729" containerName="mariadb-client-1-default"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.592033 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"]
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.592462 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.596233 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-htnfg"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.686910 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4z4\" (UniqueName: \"kubernetes.io/projected/268189d9-a847-43af-97f0-deff02946456-kube-api-access-hl4z4\") pod \"mariadb-client-2-default\" (UID: \"268189d9-a847-43af-97f0-deff02946456\") " pod="openstack/mariadb-client-2-default"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.788466 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl4z4\" (UniqueName: \"kubernetes.io/projected/268189d9-a847-43af-97f0-deff02946456-kube-api-access-hl4z4\") pod \"mariadb-client-2-default\" (UID: \"268189d9-a847-43af-97f0-deff02946456\") " pod="openstack/mariadb-client-2-default"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.805830 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl4z4\" (UniqueName: \"kubernetes.io/projected/268189d9-a847-43af-97f0-deff02946456-kube-api-access-hl4z4\") pod \"mariadb-client-2-default\" (UID: \"268189d9-a847-43af-97f0-deff02946456\") " pod="openstack/mariadb-client-2-default"
Jan 05 23:15:56 crc kubenswrapper[5034]: I0105 23:15:56.910654 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default"
Jan 05 23:15:57 crc kubenswrapper[5034]: I0105 23:15:57.378258 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"]
Jan 05 23:15:57 crc kubenswrapper[5034]: W0105 23:15:57.390817 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod268189d9_a847_43af_97f0_deff02946456.slice/crio-ba043565c03dfdf71e8799cf19a81c17b983806ad8beab300e54a8ea6c365e16 WatchSource:0}: Error finding container ba043565c03dfdf71e8799cf19a81c17b983806ad8beab300e54a8ea6c365e16: Status 404 returned error can't find the container with id ba043565c03dfdf71e8799cf19a81c17b983806ad8beab300e54a8ea6c365e16
Jan 05 23:15:57 crc kubenswrapper[5034]: I0105 23:15:57.538358 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"268189d9-a847-43af-97f0-deff02946456","Type":"ContainerStarted","Data":"ba043565c03dfdf71e8799cf19a81c17b983806ad8beab300e54a8ea6c365e16"}
Jan 05 23:15:57 crc kubenswrapper[5034]: I0105 23:15:57.850190 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36c488ec-504f-4d15-ae0f-30ca25b06729" path="/var/lib/kubelet/pods/36c488ec-504f-4d15-ae0f-30ca25b06729/volumes"
Jan 05 23:15:58 crc kubenswrapper[5034]: I0105 23:15:58.425542 5034 scope.go:117] "RemoveContainer" containerID="f3ec7983642ffebe89b45dc29276ed0a83f174dafdbda33664d9e0d2b5702f16"
Jan 05 23:15:58 crc kubenswrapper[5034]: I0105 23:15:58.547852 5034 generic.go:334] "Generic (PLEG): container finished" podID="268189d9-a847-43af-97f0-deff02946456" containerID="14e897b4f031d677606f0a43622da174f70d25a5a9bbc5668cb46c6eab385a96" exitCode=1
Jan 05 23:15:58 crc kubenswrapper[5034]: I0105 23:15:58.547903 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"268189d9-a847-43af-97f0-deff02946456","Type":"ContainerDied","Data":"14e897b4f031d677606f0a43622da174f70d25a5a9bbc5668cb46c6eab385a96"}
Jan 05 23:15:59 crc kubenswrapper[5034]: I0105 23:15:59.905832 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default"
Jan 05 23:15:59 crc kubenswrapper[5034]: I0105 23:15:59.927116 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2-default_268189d9-a847-43af-97f0-deff02946456/mariadb-client-2-default/0.log"
Jan 05 23:15:59 crc kubenswrapper[5034]: I0105 23:15:59.963316 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"]
Jan 05 23:15:59 crc kubenswrapper[5034]: I0105 23:15:59.969480 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"]
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.041031 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl4z4\" (UniqueName: \"kubernetes.io/projected/268189d9-a847-43af-97f0-deff02946456-kube-api-access-hl4z4\") pod \"268189d9-a847-43af-97f0-deff02946456\" (UID: \"268189d9-a847-43af-97f0-deff02946456\") "
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.047385 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268189d9-a847-43af-97f0-deff02946456-kube-api-access-hl4z4" (OuterVolumeSpecName: "kube-api-access-hl4z4") pod "268189d9-a847-43af-97f0-deff02946456" (UID: "268189d9-a847-43af-97f0-deff02946456"). InnerVolumeSpecName "kube-api-access-hl4z4". PluginName "kubernetes.io/projected", VolumeGidValue ""
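
Unlike the other short-lived client pods in this run, mariadb-client-2-default finishes with exitCode=1, which the PLEG reports as a ContainerDied event; the pod is deleted immediately afterwards, so nothing restarts it. A sketch of pulling that exit code out of pod status with client-go (namespace and pod name are from the log; kubeconfig loading is the standard out-of-cluster pattern):

    // Read a terminated container's exit code from pod status, the same
    // value the PLEG logs as exitCode=1 for mariadb-client-2-default.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("openstack").Get(context.Background(),
            "mariadb-client-2-default", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, st := range pod.Status.ContainerStatuses {
            if t := st.State.Terminated; t != nil {
                fmt.Printf("%s exited %d: %s\n", st.Name, t.ExitCode, t.Reason)
            }
        }
    }
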
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.142815 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl4z4\" (UniqueName: \"kubernetes.io/projected/268189d9-a847-43af-97f0-deff02946456-kube-api-access-hl4z4\") on node \"crc\" DevicePath \"\""
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.557050 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"]
Jan 05 23:16:00 crc kubenswrapper[5034]: E0105 23:16:00.557441 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268189d9-a847-43af-97f0-deff02946456" containerName="mariadb-client-2-default"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.557461 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="268189d9-a847-43af-97f0-deff02946456" containerName="mariadb-client-2-default"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.557632 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="268189d9-a847-43af-97f0-deff02946456" containerName="mariadb-client-2-default"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.558187 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.567301 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba043565c03dfdf71e8799cf19a81c17b983806ad8beab300e54a8ea6c365e16"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.567367 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.569222 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"]
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.651545 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28dm\" (UniqueName: \"kubernetes.io/projected/39075be1-1b24-4910-9cdb-c03d148c9fd0-kube-api-access-t28dm\") pod \"mariadb-client-1\" (UID: \"39075be1-1b24-4910-9cdb-c03d148c9fd0\") " pod="openstack/mariadb-client-1"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.753701 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28dm\" (UniqueName: \"kubernetes.io/projected/39075be1-1b24-4910-9cdb-c03d148c9fd0-kube-api-access-t28dm\") pod \"mariadb-client-1\" (UID: \"39075be1-1b24-4910-9cdb-c03d148c9fd0\") " pod="openstack/mariadb-client-1"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.770557 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28dm\" (UniqueName: \"kubernetes.io/projected/39075be1-1b24-4910-9cdb-c03d148c9fd0-kube-api-access-t28dm\") pod \"mariadb-client-1\" (UID: \"39075be1-1b24-4910-9cdb-c03d148c9fd0\") " pod="openstack/mariadb-client-1"
Jan 05 23:16:00 crc kubenswrapper[5034]: I0105 23:16:00.883887 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1"
Jan 05 23:16:01 crc kubenswrapper[5034]: I0105 23:16:01.209176 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"]
Jan 05 23:16:01 crc kubenswrapper[5034]: W0105 23:16:01.212515 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39075be1_1b24_4910_9cdb_c03d148c9fd0.slice/crio-97890e26646044e8472bbda9574427121cd5833bd618e6dd751d64a99a6d0709 WatchSource:0}: Error finding container 97890e26646044e8472bbda9574427121cd5833bd618e6dd751d64a99a6d0709: Status 404 returned error can't find the container with id 97890e26646044e8472bbda9574427121cd5833bd618e6dd751d64a99a6d0709
Jan 05 23:16:01 crc kubenswrapper[5034]: I0105 23:16:01.576723 5034 generic.go:334] "Generic (PLEG): container finished" podID="39075be1-1b24-4910-9cdb-c03d148c9fd0" containerID="53a0409ee5c090428d20604d4c0250cb18c472e2fb758834b5f1b264a5a1adad" exitCode=0
Jan 05 23:16:01 crc kubenswrapper[5034]: I0105 23:16:01.576777 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"39075be1-1b24-4910-9cdb-c03d148c9fd0","Type":"ContainerDied","Data":"53a0409ee5c090428d20604d4c0250cb18c472e2fb758834b5f1b264a5a1adad"}
Jan 05 23:16:01 crc kubenswrapper[5034]: I0105 23:16:01.576808 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"39075be1-1b24-4910-9cdb-c03d148c9fd0","Type":"ContainerStarted","Data":"97890e26646044e8472bbda9574427121cd5833bd618e6dd751d64a99a6d0709"}
Jan 05 23:16:01 crc kubenswrapper[5034]: I0105 23:16:01.849894 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268189d9-a847-43af-97f0-deff02946456" path="/var/lib/kubelet/pods/268189d9-a847-43af-97f0-deff02946456/volumes"
Jan 05 23:16:02 crc kubenswrapper[5034]: I0105 23:16:02.915040 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1"
Jan 05 23:16:02 crc kubenswrapper[5034]: I0105 23:16:02.933268 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_39075be1-1b24-4910-9cdb-c03d148c9fd0/mariadb-client-1/0.log"
Jan 05 23:16:02 crc kubenswrapper[5034]: I0105 23:16:02.965254 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"]
Jan 05 23:16:02 crc kubenswrapper[5034]: I0105 23:16:02.970559 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"]
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.011856 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t28dm\" (UniqueName: \"kubernetes.io/projected/39075be1-1b24-4910-9cdb-c03d148c9fd0-kube-api-access-t28dm\") pod \"39075be1-1b24-4910-9cdb-c03d148c9fd0\" (UID: \"39075be1-1b24-4910-9cdb-c03d148c9fd0\") "
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.017408 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39075be1-1b24-4910-9cdb-c03d148c9fd0-kube-api-access-t28dm" (OuterVolumeSpecName: "kube-api-access-t28dm") pod "39075be1-1b24-4910-9cdb-c03d148c9fd0" (UID: "39075be1-1b24-4910-9cdb-c03d148c9fd0"). InnerVolumeSpecName "kube-api-access-t28dm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.114463 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t28dm\" (UniqueName: \"kubernetes.io/projected/39075be1-1b24-4910-9cdb-c03d148c9fd0-kube-api-access-t28dm\") on node \"crc\" DevicePath \"\""
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.546470 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"]
Jan 05 23:16:03 crc kubenswrapper[5034]: E0105 23:16:03.547274 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39075be1-1b24-4910-9cdb-c03d148c9fd0" containerName="mariadb-client-1"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.547317 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="39075be1-1b24-4910-9cdb-c03d148c9fd0" containerName="mariadb-client-1"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.547615 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="39075be1-1b24-4910-9cdb-c03d148c9fd0" containerName="mariadb-client-1"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.548672 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.560130 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"]
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.601497 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97890e26646044e8472bbda9574427121cd5833bd618e6dd751d64a99a6d0709"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.601587 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.623721 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djh6\" (UniqueName: \"kubernetes.io/projected/8a532abf-212d-487b-bb57-6f955a00ffd2-kube-api-access-5djh6\") pod \"mariadb-client-4-default\" (UID: \"8a532abf-212d-487b-bb57-6f955a00ffd2\") " pod="openstack/mariadb-client-4-default"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.725110 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5djh6\" (UniqueName: \"kubernetes.io/projected/8a532abf-212d-487b-bb57-6f955a00ffd2-kube-api-access-5djh6\") pod \"mariadb-client-4-default\" (UID: \"8a532abf-212d-487b-bb57-6f955a00ffd2\") " pod="openstack/mariadb-client-4-default"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.750624 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5djh6\" (UniqueName: \"kubernetes.io/projected/8a532abf-212d-487b-bb57-6f955a00ffd2-kube-api-access-5djh6\") pod \"mariadb-client-4-default\" (UID: \"8a532abf-212d-487b-bb57-6f955a00ffd2\") " pod="openstack/mariadb-client-4-default"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.849613 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39075be1-1b24-4910-9cdb-c03d148c9fd0" path="/var/lib/kubelet/pods/39075be1-1b24-4910-9cdb-c03d148c9fd0/volumes"
Jan 05 23:16:03 crc kubenswrapper[5034]: I0105 23:16:03.901276 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default"
Jan 05 23:16:04 crc kubenswrapper[5034]: I0105 23:16:04.212413 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"]
Jan 05 23:16:04 crc kubenswrapper[5034]: W0105 23:16:04.214501 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a532abf_212d_487b_bb57_6f955a00ffd2.slice/crio-89849ff68643fca27aeb060fe7cf2060dd79383a401847bb9c02e98da904f201 WatchSource:0}: Error finding container 89849ff68643fca27aeb060fe7cf2060dd79383a401847bb9c02e98da904f201: Status 404 returned error can't find the container with id 89849ff68643fca27aeb060fe7cf2060dd79383a401847bb9c02e98da904f201
Jan 05 23:16:04 crc kubenswrapper[5034]: I0105 23:16:04.611726 5034 generic.go:334] "Generic (PLEG): container finished" podID="8a532abf-212d-487b-bb57-6f955a00ffd2" containerID="856634d8cf74ebde242c272d9038d385ac178182c9251bd1ee70ae17e41cef64" exitCode=0
Jan 05 23:16:04 crc kubenswrapper[5034]: I0105 23:16:04.611791 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"8a532abf-212d-487b-bb57-6f955a00ffd2","Type":"ContainerDied","Data":"856634d8cf74ebde242c272d9038d385ac178182c9251bd1ee70ae17e41cef64"}
Jan 05 23:16:04 crc kubenswrapper[5034]: I0105 23:16:04.611831 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"8a532abf-212d-487b-bb57-6f955a00ffd2","Type":"ContainerStarted","Data":"89849ff68643fca27aeb060fe7cf2060dd79383a401847bb9c02e98da904f201"}
Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.054174 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default"
Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.079571 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_8a532abf-212d-487b-bb57-6f955a00ffd2/mariadb-client-4-default/0.log"
Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.115477 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"]
Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.124705 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"]
Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.171881 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5djh6\" (UniqueName: \"kubernetes.io/projected/8a532abf-212d-487b-bb57-6f955a00ffd2-kube-api-access-5djh6\") pod \"8a532abf-212d-487b-bb57-6f955a00ffd2\" (UID: \"8a532abf-212d-487b-bb57-6f955a00ffd2\") "
Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.180430 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a532abf-212d-487b-bb57-6f955a00ffd2-kube-api-access-5djh6" (OuterVolumeSpecName: "kube-api-access-5djh6") pod "8a532abf-212d-487b-bb57-6f955a00ffd2" (UID: "8a532abf-212d-487b-bb57-6f955a00ffd2"). InnerVolumeSpecName "kube-api-access-5djh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.274024 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5djh6\" (UniqueName: \"kubernetes.io/projected/8a532abf-212d-487b-bb57-6f955a00ffd2-kube-api-access-5djh6\") on node \"crc\" DevicePath \"\"" Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.637592 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89849ff68643fca27aeb060fe7cf2060dd79383a401847bb9c02e98da904f201" Jan 05 23:16:06 crc kubenswrapper[5034]: I0105 23:16:06.637714 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 05 23:16:07 crc kubenswrapper[5034]: I0105 23:16:07.849108 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a532abf-212d-487b-bb57-6f955a00ffd2" path="/var/lib/kubelet/pods/8a532abf-212d-487b-bb57-6f955a00ffd2/volumes" Jan 05 23:16:09 crc kubenswrapper[5034]: I0105 23:16:09.838449 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:16:09 crc kubenswrapper[5034]: E0105 23:16:09.839003 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.069769 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Jan 05 23:16:10 crc kubenswrapper[5034]: E0105 23:16:10.071530 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a532abf-212d-487b-bb57-6f955a00ffd2" containerName="mariadb-client-4-default" Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.071562 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a532abf-212d-487b-bb57-6f955a00ffd2" containerName="mariadb-client-4-default" Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.071775 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a532abf-212d-487b-bb57-6f955a00ffd2" containerName="mariadb-client-4-default" Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.073587 5034 util.go:30] "No sandbox for pod can be found. 
Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.073587 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default"
Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.077342 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-htnfg"
Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.080528 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"]
Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.237135 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhht\" (UniqueName: \"kubernetes.io/projected/f9fd1b1a-d079-4b47-8d97-a066e467094e-kube-api-access-xbhht\") pod \"mariadb-client-5-default\" (UID: \"f9fd1b1a-d079-4b47-8d97-a066e467094e\") " pod="openstack/mariadb-client-5-default"
Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.339268 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbhht\" (UniqueName: \"kubernetes.io/projected/f9fd1b1a-d079-4b47-8d97-a066e467094e-kube-api-access-xbhht\") pod \"mariadb-client-5-default\" (UID: \"f9fd1b1a-d079-4b47-8d97-a066e467094e\") " pod="openstack/mariadb-client-5-default"
Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.364825 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbhht\" (UniqueName: \"kubernetes.io/projected/f9fd1b1a-d079-4b47-8d97-a066e467094e-kube-api-access-xbhht\") pod \"mariadb-client-5-default\" (UID: \"f9fd1b1a-d079-4b47-8d97-a066e467094e\") " pod="openstack/mariadb-client-5-default"
Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.396892 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default"
Jan 05 23:16:10 crc kubenswrapper[5034]: I0105 23:16:10.730979 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"]
Jan 05 23:16:11 crc kubenswrapper[5034]: I0105 23:16:11.698123 5034 generic.go:334] "Generic (PLEG): container finished" podID="f9fd1b1a-d079-4b47-8d97-a066e467094e" containerID="52b7231ba46d22ae0b8e17b51d8aff5a1462b5d43111674dc9a2afcefd8ae7eb" exitCode=0
Jan 05 23:16:11 crc kubenswrapper[5034]: I0105 23:16:11.698196 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"f9fd1b1a-d079-4b47-8d97-a066e467094e","Type":"ContainerDied","Data":"52b7231ba46d22ae0b8e17b51d8aff5a1462b5d43111674dc9a2afcefd8ae7eb"}
Jan 05 23:16:11 crc kubenswrapper[5034]: I0105 23:16:11.698605 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"f9fd1b1a-d079-4b47-8d97-a066e467094e","Type":"ContainerStarted","Data":"1f1b9c4dc91421704c7e9159dce03aed7aa937d299f7bb0973008e1e8149db8d"}
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.123336 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.150107 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_f9fd1b1a-d079-4b47-8d97-a066e467094e/mariadb-client-5-default/0.log"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.188440 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbhht\" (UniqueName: \"kubernetes.io/projected/f9fd1b1a-d079-4b47-8d97-a066e467094e-kube-api-access-xbhht\") pod \"f9fd1b1a-d079-4b47-8d97-a066e467094e\" (UID: \"f9fd1b1a-d079-4b47-8d97-a066e467094e\") "
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.196892 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fd1b1a-d079-4b47-8d97-a066e467094e-kube-api-access-xbhht" (OuterVolumeSpecName: "kube-api-access-xbhht") pod "f9fd1b1a-d079-4b47-8d97-a066e467094e" (UID: "f9fd1b1a-d079-4b47-8d97-a066e467094e"). InnerVolumeSpecName "kube-api-access-xbhht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.196996 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"]
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.207847 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"]
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.291195 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbhht\" (UniqueName: \"kubernetes.io/projected/f9fd1b1a-d079-4b47-8d97-a066e467094e-kube-api-access-xbhht\") on node \"crc\" DevicePath \"\""
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.341809 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 05 23:16:13 crc kubenswrapper[5034]: E0105 23:16:13.342787 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fd1b1a-d079-4b47-8d97-a066e467094e" containerName="mariadb-client-5-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.342822 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fd1b1a-d079-4b47-8d97-a066e467094e" containerName="mariadb-client-5-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.343211 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9fd1b1a-d079-4b47-8d97-a066e467094e" containerName="mariadb-client-5-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.344336 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.355300 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.502629 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n79j\" (UniqueName: \"kubernetes.io/projected/daf4fb27-d75b-4442-8cb7-48845f3feaf4-kube-api-access-4n79j\") pod \"mariadb-client-6-default\" (UID: \"daf4fb27-d75b-4442-8cb7-48845f3feaf4\") " pod="openstack/mariadb-client-6-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.604578 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n79j\" (UniqueName: \"kubernetes.io/projected/daf4fb27-d75b-4442-8cb7-48845f3feaf4-kube-api-access-4n79j\") pod \"mariadb-client-6-default\" (UID: \"daf4fb27-d75b-4442-8cb7-48845f3feaf4\") " pod="openstack/mariadb-client-6-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.630357 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n79j\" (UniqueName: \"kubernetes.io/projected/daf4fb27-d75b-4442-8cb7-48845f3feaf4-kube-api-access-4n79j\") pod \"mariadb-client-6-default\" (UID: \"daf4fb27-d75b-4442-8cb7-48845f3feaf4\") " pod="openstack/mariadb-client-6-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.702012 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.714436 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1b9c4dc91421704c7e9159dce03aed7aa937d299f7bb0973008e1e8149db8d"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.714506 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default"
Jan 05 23:16:13 crc kubenswrapper[5034]: I0105 23:16:13.853989 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fd1b1a-d079-4b47-8d97-a066e467094e" path="/var/lib/kubelet/pods/f9fd1b1a-d079-4b47-8d97-a066e467094e/volumes"
Jan 05 23:16:14 crc kubenswrapper[5034]: I0105 23:16:14.226130 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 05 23:16:14 crc kubenswrapper[5034]: W0105 23:16:14.231129 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaf4fb27_d75b_4442_8cb7_48845f3feaf4.slice/crio-9ec42320ec5b9ac0b297685917ca7a7af528ab4b1a962a6282f810f673a6ce2c WatchSource:0}: Error finding container 9ec42320ec5b9ac0b297685917ca7a7af528ab4b1a962a6282f810f673a6ce2c: Status 404 returned error can't find the container with id 9ec42320ec5b9ac0b297685917ca7a7af528ab4b1a962a6282f810f673a6ce2c
Jan 05 23:16:14 crc kubenswrapper[5034]: I0105 23:16:14.723500 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"daf4fb27-d75b-4442-8cb7-48845f3feaf4","Type":"ContainerStarted","Data":"59f380a4971477f938e43e577860c7c998e6cc5066571797bda869d1ec0c46dd"}
Jan 05 23:16:14 crc kubenswrapper[5034]: I0105 23:16:14.723979 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"daf4fb27-d75b-4442-8cb7-48845f3feaf4","Type":"ContainerStarted","Data":"9ec42320ec5b9ac0b297685917ca7a7af528ab4b1a962a6282f810f673a6ce2c"}
Jan 05 23:16:14 crc kubenswrapper[5034]: I0105 23:16:14.742868 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.7428421539999999 podStartE2EDuration="1.742842154s" podCreationTimestamp="2026-01-05 23:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:16:14.739411317 +0000 UTC m=+5067.111410776" watchObservedRunningTime="2026-01-05 23:16:14.742842154 +0000 UTC m=+5067.114841613"
Jan 05 23:16:16 crc kubenswrapper[5034]: I0105 23:16:16.164463 5034 generic.go:334] "Generic (PLEG): container finished" podID="daf4fb27-d75b-4442-8cb7-48845f3feaf4" containerID="59f380a4971477f938e43e577860c7c998e6cc5066571797bda869d1ec0c46dd" exitCode=1
Jan 05 23:16:16 crc kubenswrapper[5034]: I0105 23:16:16.169624 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"daf4fb27-d75b-4442-8cb7-48845f3feaf4","Type":"ContainerDied","Data":"59f380a4971477f938e43e577860c7c998e6cc5066571797bda869d1ec0c46dd"}
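Note that mariadb-client-6-default is the one client pod in this run whose container finished with exitCode=1 rather than 0, and the pod_startup_latency_tracker entry just above records its startup SLO duration. A sketch for pulling those durations out of entries like it follows; the field names are copied from the log itself, while the script is an illustrative assumption, not kubelet tooling.

```python
# Sketch: extract per-pod startup durations from
# pod_startup_latency_tracker.go "Observed pod startup duration" entries.
import re
import sys

PAT = re.compile(
    r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
    r' podStartSLOduration=(?P<slo>[0-9.]+)'
    r' podStartE2EDuration="(?P<e2e>[0-9.]+)s"'
)

for line in sys.stdin:
    m = PAT.search(line)
    if m:
        print(f"{m.group('pod')}: SLO {float(m.group('slo')):.3f}s, e2e {m.group('e2e')}s")
```

On the entries in this log it would report roughly 1.74s for mariadb-client-6-default, 4.33s for community-operators-dhgpz, and 3.82s for mariadb-copy-data.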
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.619480 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.664945 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.670320 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.774168 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n79j\" (UniqueName: \"kubernetes.io/projected/daf4fb27-d75b-4442-8cb7-48845f3feaf4-kube-api-access-4n79j\") pod \"daf4fb27-d75b-4442-8cb7-48845f3feaf4\" (UID: \"daf4fb27-d75b-4442-8cb7-48845f3feaf4\") "
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.781352 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf4fb27-d75b-4442-8cb7-48845f3feaf4-kube-api-access-4n79j" (OuterVolumeSpecName: "kube-api-access-4n79j") pod "daf4fb27-d75b-4442-8cb7-48845f3feaf4" (UID: "daf4fb27-d75b-4442-8cb7-48845f3feaf4"). InnerVolumeSpecName "kube-api-access-4n79j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.850504 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf4fb27-d75b-4442-8cb7-48845f3feaf4" path="/var/lib/kubelet/pods/daf4fb27-d75b-4442-8cb7-48845f3feaf4/volumes"
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.851676 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 05 23:16:17 crc kubenswrapper[5034]: E0105 23:16:17.854620 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf4fb27-d75b-4442-8cb7-48845f3feaf4" containerName="mariadb-client-6-default"
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.854665 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf4fb27-d75b-4442-8cb7-48845f3feaf4" containerName="mariadb-client-6-default"
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.855053 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf4fb27-d75b-4442-8cb7-48845f3feaf4" containerName="mariadb-client-6-default"
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.855753 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.855895 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.878954 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n79j\" (UniqueName: \"kubernetes.io/projected/daf4fb27-d75b-4442-8cb7-48845f3feaf4-kube-api-access-4n79j\") on node \"crc\" DevicePath \"\""
Jan 05 23:16:17 crc kubenswrapper[5034]: I0105 23:16:17.981136 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtpn\" (UniqueName: \"kubernetes.io/projected/15b0a555-d546-4ecb-92e5-7790c90cf180-kube-api-access-wbtpn\") pod \"mariadb-client-7-default\" (UID: \"15b0a555-d546-4ecb-92e5-7790c90cf180\") " pod="openstack/mariadb-client-7-default"
Jan 05 23:16:18 crc kubenswrapper[5034]: I0105 23:16:18.082358 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtpn\" (UniqueName: \"kubernetes.io/projected/15b0a555-d546-4ecb-92e5-7790c90cf180-kube-api-access-wbtpn\") pod \"mariadb-client-7-default\" (UID: \"15b0a555-d546-4ecb-92e5-7790c90cf180\") " pod="openstack/mariadb-client-7-default"
Jan 05 23:16:18 crc kubenswrapper[5034]: I0105 23:16:18.103023 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtpn\" (UniqueName: \"kubernetes.io/projected/15b0a555-d546-4ecb-92e5-7790c90cf180-kube-api-access-wbtpn\") pod \"mariadb-client-7-default\" (UID: \"15b0a555-d546-4ecb-92e5-7790c90cf180\") " pod="openstack/mariadb-client-7-default"
Jan 05 23:16:18 crc kubenswrapper[5034]: I0105 23:16:18.178812 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Jan 05 23:16:18 crc kubenswrapper[5034]: I0105 23:16:18.200264 5034 scope.go:117] "RemoveContainer" containerID="59f380a4971477f938e43e577860c7c998e6cc5066571797bda869d1ec0c46dd"
Jan 05 23:16:18 crc kubenswrapper[5034]: I0105 23:16:18.200363 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Jan 05 23:16:18 crc kubenswrapper[5034]: I0105 23:16:18.500403 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 05 23:16:18 crc kubenswrapper[5034]: W0105 23:16:18.506793 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15b0a555_d546_4ecb_92e5_7790c90cf180.slice/crio-161abe1a5ad8117a6fdd37f9d076c36ef01fc09210b48bbb426ad75f7e691f36 WatchSource:0}: Error finding container 161abe1a5ad8117a6fdd37f9d076c36ef01fc09210b48bbb426ad75f7e691f36: Status 404 returned error can't find the container with id 161abe1a5ad8117a6fdd37f9d076c36ef01fc09210b48bbb426ad75f7e691f36
Jan 05 23:16:19 crc kubenswrapper[5034]: I0105 23:16:19.209418 5034 generic.go:334] "Generic (PLEG): container finished" podID="15b0a555-d546-4ecb-92e5-7790c90cf180" containerID="4b1b1cf0dc47123b2a4de4911aa4b46bdfba020a169d383096633fc365328be3" exitCode=0
Jan 05 23:16:19 crc kubenswrapper[5034]: I0105 23:16:19.209537 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"15b0a555-d546-4ecb-92e5-7790c90cf180","Type":"ContainerDied","Data":"4b1b1cf0dc47123b2a4de4911aa4b46bdfba020a169d383096633fc365328be3"}
Jan 05 23:16:19 crc kubenswrapper[5034]: I0105 23:16:19.209718 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"15b0a555-d546-4ecb-92e5-7790c90cf180","Type":"ContainerStarted","Data":"161abe1a5ad8117a6fdd37f9d076c36ef01fc09210b48bbb426ad75f7e691f36"}
Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.614395 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.637967 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_15b0a555-d546-4ecb-92e5-7790c90cf180/mariadb-client-7-default/0.log"
Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.676830 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.684372 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.735229 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbtpn\" (UniqueName: \"kubernetes.io/projected/15b0a555-d546-4ecb-92e5-7790c90cf180-kube-api-access-wbtpn\") pod \"15b0a555-d546-4ecb-92e5-7790c90cf180\" (UID: \"15b0a555-d546-4ecb-92e5-7790c90cf180\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.838163 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbtpn\" (UniqueName: \"kubernetes.io/projected/15b0a555-d546-4ecb-92e5-7790c90cf180-kube-api-access-wbtpn\") on node \"crc\" DevicePath \"\"" Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.870847 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Jan 05 23:16:20 crc kubenswrapper[5034]: E0105 23:16:20.871386 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b0a555-d546-4ecb-92e5-7790c90cf180" containerName="mariadb-client-7-default" Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.871416 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b0a555-d546-4ecb-92e5-7790c90cf180" containerName="mariadb-client-7-default" Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.871608 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b0a555-d546-4ecb-92e5-7790c90cf180" containerName="mariadb-client-7-default" Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.872570 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Jan 05 23:16:20 crc kubenswrapper[5034]: I0105 23:16:20.885068 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.042253 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spp4f\" (UniqueName: \"kubernetes.io/projected/9cfad38f-4b66-4e8e-9606-b22662652b01-kube-api-access-spp4f\") pod \"mariadb-client-2\" (UID: \"9cfad38f-4b66-4e8e-9606-b22662652b01\") " pod="openstack/mariadb-client-2" Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.144919 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spp4f\" (UniqueName: \"kubernetes.io/projected/9cfad38f-4b66-4e8e-9606-b22662652b01-kube-api-access-spp4f\") pod \"mariadb-client-2\" (UID: \"9cfad38f-4b66-4e8e-9606-b22662652b01\") " pod="openstack/mariadb-client-2" Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.179823 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spp4f\" (UniqueName: \"kubernetes.io/projected/9cfad38f-4b66-4e8e-9606-b22662652b01-kube-api-access-spp4f\") pod \"mariadb-client-2\" (UID: \"9cfad38f-4b66-4e8e-9606-b22662652b01\") " pod="openstack/mariadb-client-2" Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.188624 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.231810 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161abe1a5ad8117a6fdd37f9d076c36ef01fc09210b48bbb426ad75f7e691f36" Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.231921 5034 util.go:48] "No ready sandbox for pod can be found. 
Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.231921 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.588781 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"]
Jan 05 23:16:21 crc kubenswrapper[5034]: W0105 23:16:21.596993 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cfad38f_4b66_4e8e_9606_b22662652b01.slice/crio-71ee614fa0afedffdbaa07e82e754d5470729b1198260075356c98d170cfb17b WatchSource:0}: Error finding container 71ee614fa0afedffdbaa07e82e754d5470729b1198260075356c98d170cfb17b: Status 404 returned error can't find the container with id 71ee614fa0afedffdbaa07e82e754d5470729b1198260075356c98d170cfb17b
Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.838851 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:16:21 crc kubenswrapper[5034]: E0105 23:16:21.840238 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:16:21 crc kubenswrapper[5034]: I0105 23:16:21.873810 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15b0a555-d546-4ecb-92e5-7790c90cf180" path="/var/lib/kubelet/pods/15b0a555-d546-4ecb-92e5-7790c90cf180/volumes"
Jan 05 23:16:22 crc kubenswrapper[5034]: I0105 23:16:22.245117 5034 generic.go:334] "Generic (PLEG): container finished" podID="9cfad38f-4b66-4e8e-9606-b22662652b01" containerID="475e0f00e550cff04d67ec2f867036653fac16507e1ddab5be0a6d42436d96c6" exitCode=0
Jan 05 23:16:22 crc kubenswrapper[5034]: I0105 23:16:22.245185 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"9cfad38f-4b66-4e8e-9606-b22662652b01","Type":"ContainerDied","Data":"475e0f00e550cff04d67ec2f867036653fac16507e1ddab5be0a6d42436d96c6"}
Jan 05 23:16:22 crc kubenswrapper[5034]: I0105 23:16:22.245233 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"9cfad38f-4b66-4e8e-9606-b22662652b01","Type":"ContainerStarted","Data":"71ee614fa0afedffdbaa07e82e754d5470729b1198260075356c98d170cfb17b"}
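Each of these short-lived pods shows the same PLEG pattern: generic.go:334 reports "container finished" with an exitCode, then kubelet.go:2453 emits the matching ContainerDied/ContainerStarted events. A sketch for grouping those exit codes per pod follows, useful for spotting the one nonzero exit (mariadb-client-6-default, exitCode=1) in a sea of clean exits. The regex mirrors the generic.go lines; the script itself is illustrative, not part of kubelet.

```python
# Sketch: collect PLEG "container finished" exit codes per pod.
import re
import sys
from collections import defaultdict

FINISHED = re.compile(
    r'"Generic \(PLEG\): container finished" podID="(?P<pod>[^"]+)"'
    r' containerID="(?P<cid>[0-9a-f]+)" exitCode=(?P<rc>-?\d+)'
)

exits = defaultdict(list)
for line in sys.stdin:
    m = FINISHED.search(line)
    if m:
        exits[m.group("pod")].append(int(m.group("rc")))

for pod, codes in exits.items():
    marker = "  <-- nonzero exit" if any(codes) else ""
    print(f"{pod}: exit codes {codes}{marker}")
```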
Jan 05 23:16:23 crc kubenswrapper[5034]: I0105 23:16:23.685565 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Jan 05 23:16:23 crc kubenswrapper[5034]: I0105 23:16:23.710407 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_9cfad38f-4b66-4e8e-9606-b22662652b01/mariadb-client-2/0.log"
Jan 05 23:16:23 crc kubenswrapper[5034]: I0105 23:16:23.738160 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"]
Jan 05 23:16:23 crc kubenswrapper[5034]: I0105 23:16:23.744200 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"]
Jan 05 23:16:23 crc kubenswrapper[5034]: I0105 23:16:23.799434 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spp4f\" (UniqueName: \"kubernetes.io/projected/9cfad38f-4b66-4e8e-9606-b22662652b01-kube-api-access-spp4f\") pod \"9cfad38f-4b66-4e8e-9606-b22662652b01\" (UID: \"9cfad38f-4b66-4e8e-9606-b22662652b01\") "
Jan 05 23:16:23 crc kubenswrapper[5034]: I0105 23:16:23.807485 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cfad38f-4b66-4e8e-9606-b22662652b01-kube-api-access-spp4f" (OuterVolumeSpecName: "kube-api-access-spp4f") pod "9cfad38f-4b66-4e8e-9606-b22662652b01" (UID: "9cfad38f-4b66-4e8e-9606-b22662652b01"). InnerVolumeSpecName "kube-api-access-spp4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:16:23 crc kubenswrapper[5034]: I0105 23:16:23.850462 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cfad38f-4b66-4e8e-9606-b22662652b01" path="/var/lib/kubelet/pods/9cfad38f-4b66-4e8e-9606-b22662652b01/volumes"
Jan 05 23:16:23 crc kubenswrapper[5034]: I0105 23:16:23.901962 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spp4f\" (UniqueName: \"kubernetes.io/projected/9cfad38f-4b66-4e8e-9606-b22662652b01-kube-api-access-spp4f\") on node \"crc\" DevicePath \"\""
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.150727 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dhgpz"]
Jan 05 23:16:24 crc kubenswrapper[5034]: E0105 23:16:24.151283 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfad38f-4b66-4e8e-9606-b22662652b01" containerName="mariadb-client-2"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.151310 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfad38f-4b66-4e8e-9606-b22662652b01" containerName="mariadb-client-2"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.151493 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cfad38f-4b66-4e8e-9606-b22662652b01" containerName="mariadb-client-2"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.153429 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.164210 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhgpz"]
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.262393 5034 scope.go:117] "RemoveContainer" containerID="475e0f00e550cff04d67ec2f867036653fac16507e1ddab5be0a6d42436d96c6"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.262461 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.308634 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-utilities\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.308789 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-catalog-content\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.308840 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmlc\" (UniqueName: \"kubernetes.io/projected/913b5995-307c-4979-a7a5-56e8c3040e40-kube-api-access-fcmlc\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.410665 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-utilities\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.411184 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-catalog-content\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.411227 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcmlc\" (UniqueName: \"kubernetes.io/projected/913b5995-307c-4979-a7a5-56e8c3040e40-kube-api-access-fcmlc\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.411290 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-utilities\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.412050 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-catalog-content\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.432567 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcmlc\" (UniqueName: \"kubernetes.io/projected/913b5995-307c-4979-a7a5-56e8c3040e40-kube-api-access-fcmlc\") pod \"community-operators-dhgpz\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") " pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:24 crc kubenswrapper[5034]: I0105 23:16:24.476108 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:25 crc kubenswrapper[5034]: I0105 23:16:25.031461 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhgpz"]
Jan 05 23:16:25 crc kubenswrapper[5034]: W0105 23:16:25.036549 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod913b5995_307c_4979_a7a5_56e8c3040e40.slice/crio-934738ff3ab3c1a19d00e3de4817f1e2ef6b326153d40ebddfc244b1b31a4faa WatchSource:0}: Error finding container 934738ff3ab3c1a19d00e3de4817f1e2ef6b326153d40ebddfc244b1b31a4faa: Status 404 returned error can't find the container with id 934738ff3ab3c1a19d00e3de4817f1e2ef6b326153d40ebddfc244b1b31a4faa
Jan 05 23:16:25 crc kubenswrapper[5034]: I0105 23:16:25.272401 5034 generic.go:334] "Generic (PLEG): container finished" podID="913b5995-307c-4979-a7a5-56e8c3040e40" containerID="d498e9c7a6b564ccfdd023877ccf32773ff6ca0e020971f96008010bc8a589b4" exitCode=0
Jan 05 23:16:25 crc kubenswrapper[5034]: I0105 23:16:25.273406 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhgpz" event={"ID":"913b5995-307c-4979-a7a5-56e8c3040e40","Type":"ContainerDied","Data":"d498e9c7a6b564ccfdd023877ccf32773ff6ca0e020971f96008010bc8a589b4"}
Jan 05 23:16:25 crc kubenswrapper[5034]: I0105 23:16:25.273443 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhgpz" event={"ID":"913b5995-307c-4979-a7a5-56e8c3040e40","Type":"ContainerStarted","Data":"934738ff3ab3c1a19d00e3de4817f1e2ef6b326153d40ebddfc244b1b31a4faa"}
Jan 05 23:16:26 crc kubenswrapper[5034]: I0105 23:16:26.284235 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhgpz" event={"ID":"913b5995-307c-4979-a7a5-56e8c3040e40","Type":"ContainerStarted","Data":"9a98ed559da3cedf9683e385596d8a4e49eeacf36603ce804050bcae36f0a0b1"}
Jan 05 23:16:27 crc kubenswrapper[5034]: I0105 23:16:27.298306 5034 generic.go:334] "Generic (PLEG): container finished" podID="913b5995-307c-4979-a7a5-56e8c3040e40" containerID="9a98ed559da3cedf9683e385596d8a4e49eeacf36603ce804050bcae36f0a0b1" exitCode=0
Jan 05 23:16:27 crc kubenswrapper[5034]: I0105 23:16:27.298380 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhgpz" event={"ID":"913b5995-307c-4979-a7a5-56e8c3040e40","Type":"ContainerDied","Data":"9a98ed559da3cedf9683e385596d8a4e49eeacf36603ce804050bcae36f0a0b1"}
Jan 05 23:16:28 crc kubenswrapper[5034]: I0105 23:16:28.310222 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhgpz" event={"ID":"913b5995-307c-4979-a7a5-56e8c3040e40","Type":"ContainerStarted","Data":"750a5ca0828472b5d82d971c7c84eabd5cc6e2784961cb3db6d5413bbefd51ee"}
Jan 05 23:16:28 crc kubenswrapper[5034]: I0105 23:16:28.333179 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dhgpz" podStartSLOduration=1.863472199 podStartE2EDuration="4.333145966s" podCreationTimestamp="2026-01-05 23:16:24 +0000 UTC" firstStartedPulling="2026-01-05 23:16:25.274498811 +0000 UTC m=+5077.646498250" lastFinishedPulling="2026-01-05 23:16:27.744172568 +0000 UTC m=+5080.116172017" observedRunningTime="2026-01-05 23:16:28.330325896 +0000 UTC m=+5080.702325345" watchObservedRunningTime="2026-01-05 23:16:28.333145966 +0000 UTC m=+5080.705145405"
Jan 05 23:16:33 crc kubenswrapper[5034]: I0105 23:16:33.838940 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:16:33 crc kubenswrapper[5034]: E0105 23:16:33.840169 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:16:34 crc kubenswrapper[5034]: I0105 23:16:34.476587 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:34 crc kubenswrapper[5034]: I0105 23:16:34.476677 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:34 crc kubenswrapper[5034]: I0105 23:16:34.528374 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:35 crc kubenswrapper[5034]: I0105 23:16:35.457771 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:35 crc kubenswrapper[5034]: I0105 23:16:35.518002 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhgpz"]
Jan 05 23:16:37 crc kubenswrapper[5034]: I0105 23:16:37.393650 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dhgpz" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" containerName="registry-server" containerID="cri-o://750a5ca0828472b5d82d971c7c84eabd5cc6e2784961cb3db6d5413bbefd51ee" gracePeriod=2
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.404449 5034 generic.go:334] "Generic (PLEG): container finished" podID="913b5995-307c-4979-a7a5-56e8c3040e40" containerID="750a5ca0828472b5d82d971c7c84eabd5cc6e2784961cb3db6d5413bbefd51ee" exitCode=0
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.404674 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhgpz" event={"ID":"913b5995-307c-4979-a7a5-56e8c3040e40","Type":"ContainerDied","Data":"750a5ca0828472b5d82d971c7c84eabd5cc6e2784961cb3db6d5413bbefd51ee"}
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.404935 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhgpz" event={"ID":"913b5995-307c-4979-a7a5-56e8c3040e40","Type":"ContainerDied","Data":"934738ff3ab3c1a19d00e3de4817f1e2ef6b326153d40ebddfc244b1b31a4faa"}
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.404958 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="934738ff3ab3c1a19d00e3de4817f1e2ef6b326153d40ebddfc244b1b31a4faa"
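The "SyncLoop (probe)" entries above trace the catalog pod's startup probe flipping from unhealthy to started, and its readiness probe from the empty initial status to ready, just before the pod is deleted and its registry-server container is killed with a 2s grace period. A small sketch for following those transitions per pod and probe is below; the regex mirrors the kubelet.go:2542 lines, and the tracker itself is illustrative.

```python
# Sketch: print probe status transitions from "SyncLoop (probe)" entries.
import re
import sys

PROBE = re.compile(r'"SyncLoop \(probe\)" probe="(?P<probe>\w+)"'
                   r' status="(?P<status>[^"]*)" pod="(?P<pod>[^"]+)"')

last = {}
for line in sys.stdin:
    m = PROBE.search(line)
    if m:
        key = (m.group("pod"), m.group("probe"))
        prev = last.get(key, "<none>")
        cur = m.group("status") or '""'
        if prev != cur:
            print(f"{key[0]} {key[1]}: {prev} -> {cur}")
            last[key] = cur
```

For community-operators-dhgpz this would print startup: <none> -> unhealthy -> started and readiness: <none> -> "" -> ready, matching the four entries above.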
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.419007 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.469320 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-utilities\") pod \"913b5995-307c-4979-a7a5-56e8c3040e40\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") "
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.469395 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-catalog-content\") pod \"913b5995-307c-4979-a7a5-56e8c3040e40\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") "
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.469456 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcmlc\" (UniqueName: \"kubernetes.io/projected/913b5995-307c-4979-a7a5-56e8c3040e40-kube-api-access-fcmlc\") pod \"913b5995-307c-4979-a7a5-56e8c3040e40\" (UID: \"913b5995-307c-4979-a7a5-56e8c3040e40\") "
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.470594 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-utilities" (OuterVolumeSpecName: "utilities") pod "913b5995-307c-4979-a7a5-56e8c3040e40" (UID: "913b5995-307c-4979-a7a5-56e8c3040e40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.478134 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913b5995-307c-4979-a7a5-56e8c3040e40-kube-api-access-fcmlc" (OuterVolumeSpecName: "kube-api-access-fcmlc") pod "913b5995-307c-4979-a7a5-56e8c3040e40" (UID: "913b5995-307c-4979-a7a5-56e8c3040e40"). InnerVolumeSpecName "kube-api-access-fcmlc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.528222 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "913b5995-307c-4979-a7a5-56e8c3040e40" (UID: "913b5995-307c-4979-a7a5-56e8c3040e40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.571030 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-utilities\") on node \"crc\" DevicePath \"\""
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.571367 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913b5995-307c-4979-a7a5-56e8c3040e40-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 05 23:16:38 crc kubenswrapper[5034]: I0105 23:16:38.571384 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcmlc\" (UniqueName: \"kubernetes.io/projected/913b5995-307c-4979-a7a5-56e8c3040e40-kube-api-access-fcmlc\") on node \"crc\" DevicePath \"\""
Jan 05 23:16:39 crc kubenswrapper[5034]: I0105 23:16:39.412319 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhgpz"
Jan 05 23:16:39 crc kubenswrapper[5034]: I0105 23:16:39.448743 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhgpz"]
Jan 05 23:16:39 crc kubenswrapper[5034]: I0105 23:16:39.460307 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dhgpz"]
Jan 05 23:16:39 crc kubenswrapper[5034]: I0105 23:16:39.849153 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" path="/var/lib/kubelet/pods/913b5995-307c-4979-a7a5-56e8c3040e40/volumes"
Jan 05 23:16:44 crc kubenswrapper[5034]: I0105 23:16:44.838371 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:16:44 crc kubenswrapper[5034]: E0105 23:16:44.839342 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:16:59 crc kubenswrapper[5034]: I0105 23:16:59.838892 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:16:59 crc kubenswrapper[5034]: E0105 23:16:59.839957 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:17:12 crc kubenswrapper[5034]: I0105 23:17:12.838938 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:17:12 crc kubenswrapper[5034]: E0105 23:17:12.841616 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:17:24 crc kubenswrapper[5034]: I0105 23:17:24.844346 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:17:24 crc kubenswrapper[5034]: E0105 23:17:24.845118 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:17:36 crc kubenswrapper[5034]: I0105 23:17:36.838627 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:17:36 crc kubenswrapper[5034]: E0105 23:17:36.839395 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:17:49 crc kubenswrapper[5034]: I0105 23:17:49.838127 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:17:49 crc kubenswrapper[5034]: E0105 23:17:49.839286 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:17:58 crc kubenswrapper[5034]: I0105 23:17:58.521899 5034 scope.go:117] "RemoveContainer" containerID="987fa31a588c935ad9e2e1a5a0769d72077787cf5267d40455792130363e866c"
Jan 05 23:18:03 crc kubenswrapper[5034]: I0105 23:18:03.838765 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:18:03 crc kubenswrapper[5034]: E0105 23:18:03.839599 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:18:17 crc kubenswrapper[5034]: I0105 23:18:17.846353 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:18:17 crc kubenswrapper[5034]: E0105 23:18:17.847580 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:18:29 crc kubenswrapper[5034]: I0105 23:18:29.840070 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:18:29 crc kubenswrapper[5034]: E0105 23:18:29.840903 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:18:41 crc kubenswrapper[5034]: I0105 23:18:41.839338 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:18:41 crc kubenswrapper[5034]: E0105 23:18:41.840384 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:18:53 crc kubenswrapper[5034]: I0105 23:18:53.838827 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:18:53 crc kubenswrapper[5034]: E0105 23:18:53.839871 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:19:06 crc kubenswrapper[5034]: I0105 23:19:06.838836 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:19:06 crc kubenswrapper[5034]: E0105 23:19:06.839600 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.776971 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"]
Jan 05 23:19:19 crc kubenswrapper[5034]: E0105 23:19:19.777915 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" containerName="registry-server"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.777935 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" containerName="registry-server"
Jan 05 23:19:19 crc kubenswrapper[5034]: E0105 23:19:19.777946 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" containerName="extract-utilities"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.777953 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" containerName="extract-utilities"
Jan 05 23:19:19 crc kubenswrapper[5034]: E0105 23:19:19.777963 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" containerName="extract-content"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.777969 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" containerName="extract-content"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.778146 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="913b5995-307c-4979-a7a5-56e8c3040e40" containerName="registry-server"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.779036 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.782713 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-htnfg"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.791772 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.838839 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:19:19 crc kubenswrapper[5034]: E0105 23:19:19.839383 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.887239 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e61552dc-052c-4f88-944c-2635063091ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e61552dc-052c-4f88-944c-2635063091ac\") pod \"mariadb-copy-data\" (UID: \"34738e72-bd43-492f-be85-d38dffc26db8\") " pod="openstack/mariadb-copy-data"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.887381 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmkfh\" (UniqueName: \"kubernetes.io/projected/34738e72-bd43-492f-be85-d38dffc26db8-kube-api-access-wmkfh\") pod \"mariadb-copy-data\" (UID: \"34738e72-bd43-492f-be85-d38dffc26db8\") " pod="openstack/mariadb-copy-data"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.989576 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmkfh\" (UniqueName: \"kubernetes.io/projected/34738e72-bd43-492f-be85-d38dffc26db8-kube-api-access-wmkfh\") pod \"mariadb-copy-data\" (UID: \"34738e72-bd43-492f-be85-d38dffc26db8\") " pod="openstack/mariadb-copy-data"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.989871 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e61552dc-052c-4f88-944c-2635063091ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e61552dc-052c-4f88-944c-2635063091ac\") pod \"mariadb-copy-data\" (UID: \"34738e72-bd43-492f-be85-d38dffc26db8\") " pod="openstack/mariadb-copy-data"
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.994291 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
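The csi_attacher entry above, and the MountVolume.MountDevice entry that follows, show the CSI staging step for the mariadb-copy-data PVC: the kubevirt.io.hostpath-provisioner node plugin does not advertise the STAGE_UNSTAGE_VOLUME capability (reported via NodeGetCapabilities in the CSI spec), so kubelet records the globalmount device path but skips the NodeStageVolume call and publishes the volume directly. The toy model below is an illustrative assumption about that decision, not kubelet's actual implementation.

```python
# Sketch of the staging decision behind "STAGE_UNSTAGE_VOLUME capability not
# set. Skipping MountDevice...": NodeStageVolume is only invoked when the CSI
# node plugin reports that capability. Illustrative only.
def mount_device(node_capabilities, volume):
    if "STAGE_UNSTAGE_VOLUME" not in node_capabilities:
        # hostpath-provisioner's case: go straight to NodePublishVolume.
        print(f"{volume}: capability not set, skipping MountDevice (NodeStageVolume)")
        return
    print(f"{volume}: NodeStageVolume to .../globalmount, then NodePublishVolume")

mount_device(set(), "pvc-e61552dc-052c-4f88-944c-2635063091ac")
```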
Jan 05 23:19:19 crc kubenswrapper[5034]: I0105 23:19:19.994337 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e61552dc-052c-4f88-944c-2635063091ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e61552dc-052c-4f88-944c-2635063091ac\") pod \"mariadb-copy-data\" (UID: \"34738e72-bd43-492f-be85-d38dffc26db8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/462c19f6e094ff99d3fbaf0c784335eada555430948aba69d0d6e5e203eefda2/globalmount\"" pod="openstack/mariadb-copy-data" Jan 05 23:19:20 crc kubenswrapper[5034]: I0105 23:19:20.014652 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmkfh\" (UniqueName: \"kubernetes.io/projected/34738e72-bd43-492f-be85-d38dffc26db8-kube-api-access-wmkfh\") pod \"mariadb-copy-data\" (UID: \"34738e72-bd43-492f-be85-d38dffc26db8\") " pod="openstack/mariadb-copy-data" Jan 05 23:19:20 crc kubenswrapper[5034]: I0105 23:19:20.031905 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e61552dc-052c-4f88-944c-2635063091ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e61552dc-052c-4f88-944c-2635063091ac\") pod \"mariadb-copy-data\" (UID: \"34738e72-bd43-492f-be85-d38dffc26db8\") " pod="openstack/mariadb-copy-data" Jan 05 23:19:20 crc kubenswrapper[5034]: I0105 23:19:20.108632 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 05 23:19:20 crc kubenswrapper[5034]: I0105 23:19:20.609805 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 05 23:19:20 crc kubenswrapper[5034]: I0105 23:19:20.779291 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"34738e72-bd43-492f-be85-d38dffc26db8","Type":"ContainerStarted","Data":"7dc179c8c4b9943795aa00982030154ede5995d65f34f7e1cab93cf7ddbc6398"} Jan 05 23:19:21 crc kubenswrapper[5034]: I0105 23:19:21.792417 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"34738e72-bd43-492f-be85-d38dffc26db8","Type":"ContainerStarted","Data":"ca3052f371def4f1dcb512014abcfbeb45e60fa3acdb3b424a16ff4e0a6f3d1e"} Jan 05 23:19:21 crc kubenswrapper[5034]: I0105 23:19:21.816281 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.816260642 podStartE2EDuration="3.816260642s" podCreationTimestamp="2026-01-05 23:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:19:21.812559207 +0000 UTC m=+5254.184558646" watchObservedRunningTime="2026-01-05 23:19:21.816260642 +0000 UTC m=+5254.188260081" Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.161541 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.163467 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.168504 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.295400 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj59f\" (UniqueName: \"kubernetes.io/projected/9c6ecdbd-5bd9-44b1-ad31-15bd412051e1-kube-api-access-jj59f\") pod \"mariadb-client\" (UID: \"9c6ecdbd-5bd9-44b1-ad31-15bd412051e1\") " pod="openstack/mariadb-client" Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.397349 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj59f\" (UniqueName: \"kubernetes.io/projected/9c6ecdbd-5bd9-44b1-ad31-15bd412051e1-kube-api-access-jj59f\") pod \"mariadb-client\" (UID: \"9c6ecdbd-5bd9-44b1-ad31-15bd412051e1\") " pod="openstack/mariadb-client" Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.420582 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj59f\" (UniqueName: \"kubernetes.io/projected/9c6ecdbd-5bd9-44b1-ad31-15bd412051e1-kube-api-access-jj59f\") pod \"mariadb-client\" (UID: \"9c6ecdbd-5bd9-44b1-ad31-15bd412051e1\") " pod="openstack/mariadb-client" Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.516742 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.756330 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:25 crc kubenswrapper[5034]: I0105 23:19:25.846668 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9c6ecdbd-5bd9-44b1-ad31-15bd412051e1","Type":"ContainerStarted","Data":"ae7c2a4fb89a7019b76ca6c8ce091fa75d24f17c8fd1b4b3db35857107bf4648"} Jan 05 23:19:26 crc kubenswrapper[5034]: I0105 23:19:26.851515 5034 generic.go:334] "Generic (PLEG): container finished" podID="9c6ecdbd-5bd9-44b1-ad31-15bd412051e1" containerID="c4531d68dfa7d1f7116ac9c80d7a3f6c9fb7483135410687704c708b3a69b7b3" exitCode=0 Jan 05 23:19:26 crc kubenswrapper[5034]: I0105 23:19:26.851594 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9c6ecdbd-5bd9-44b1-ad31-15bd412051e1","Type":"ContainerDied","Data":"c4531d68dfa7d1f7116ac9c80d7a3f6c9fb7483135410687704c708b3a69b7b3"} Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.155798 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.186841 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_9c6ecdbd-5bd9-44b1-ad31-15bd412051e1/mariadb-client/0.log" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.215286 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.220828 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.248813 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj59f\" (UniqueName: \"kubernetes.io/projected/9c6ecdbd-5bd9-44b1-ad31-15bd412051e1-kube-api-access-jj59f\") pod \"9c6ecdbd-5bd9-44b1-ad31-15bd412051e1\" (UID: \"9c6ecdbd-5bd9-44b1-ad31-15bd412051e1\") " Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.255885 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6ecdbd-5bd9-44b1-ad31-15bd412051e1-kube-api-access-jj59f" (OuterVolumeSpecName: "kube-api-access-jj59f") pod "9c6ecdbd-5bd9-44b1-ad31-15bd412051e1" (UID: "9c6ecdbd-5bd9-44b1-ad31-15bd412051e1"). InnerVolumeSpecName "kube-api-access-jj59f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.351431 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj59f\" (UniqueName: \"kubernetes.io/projected/9c6ecdbd-5bd9-44b1-ad31-15bd412051e1-kube-api-access-jj59f\") on node \"crc\" DevicePath \"\"" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.419181 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:28 crc kubenswrapper[5034]: E0105 23:19:28.419696 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6ecdbd-5bd9-44b1-ad31-15bd412051e1" containerName="mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.419713 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6ecdbd-5bd9-44b1-ad31-15bd412051e1" containerName="mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.419868 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6ecdbd-5bd9-44b1-ad31-15bd412051e1" containerName="mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.420463 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.428703 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.554950 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srjr9\" (UniqueName: \"kubernetes.io/projected/79510a66-0050-4aee-a855-b8b8f7bd38fd-kube-api-access-srjr9\") pod \"mariadb-client\" (UID: \"79510a66-0050-4aee-a855-b8b8f7bd38fd\") " pod="openstack/mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.656728 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srjr9\" (UniqueName: \"kubernetes.io/projected/79510a66-0050-4aee-a855-b8b8f7bd38fd-kube-api-access-srjr9\") pod \"mariadb-client\" (UID: \"79510a66-0050-4aee-a855-b8b8f7bd38fd\") " pod="openstack/mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.672266 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srjr9\" (UniqueName: \"kubernetes.io/projected/79510a66-0050-4aee-a855-b8b8f7bd38fd-kube-api-access-srjr9\") pod \"mariadb-client\" (UID: \"79510a66-0050-4aee-a855-b8b8f7bd38fd\") " pod="openstack/mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.740892 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.872538 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae7c2a4fb89a7019b76ca6c8ce091fa75d24f17c8fd1b4b3db35857107bf4648" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.872767 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 05 23:19:28 crc kubenswrapper[5034]: I0105 23:19:28.893437 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="9c6ecdbd-5bd9-44b1-ad31-15bd412051e1" podUID="79510a66-0050-4aee-a855-b8b8f7bd38fd" Jan 05 23:19:29 crc kubenswrapper[5034]: I0105 23:19:29.138125 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:29 crc kubenswrapper[5034]: W0105 23:19:29.141055 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79510a66_0050_4aee_a855_b8b8f7bd38fd.slice/crio-c760b94dbafe0b1c3630ea6d85281d7643d0ef36eb588df40abd4954b1d54af3 WatchSource:0}: Error finding container c760b94dbafe0b1c3630ea6d85281d7643d0ef36eb588df40abd4954b1d54af3: Status 404 returned error can't find the container with id c760b94dbafe0b1c3630ea6d85281d7643d0ef36eb588df40abd4954b1d54af3 Jan 05 23:19:29 crc kubenswrapper[5034]: I0105 23:19:29.847178 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6ecdbd-5bd9-44b1-ad31-15bd412051e1" path="/var/lib/kubelet/pods/9c6ecdbd-5bd9-44b1-ad31-15bd412051e1/volumes" Jan 05 23:19:29 crc kubenswrapper[5034]: I0105 23:19:29.889445 5034 generic.go:334] "Generic (PLEG): container finished" podID="79510a66-0050-4aee-a855-b8b8f7bd38fd" containerID="be4cb2fa633803cdc48ce0f5b8c719d38444de82152109623bedfb3901442ee8" exitCode=0 Jan 05 23:19:29 crc kubenswrapper[5034]: I0105 23:19:29.889489 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"79510a66-0050-4aee-a855-b8b8f7bd38fd","Type":"ContainerDied","Data":"be4cb2fa633803cdc48ce0f5b8c719d38444de82152109623bedfb3901442ee8"} Jan 05 23:19:29 crc kubenswrapper[5034]: I0105 23:19:29.889518 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"79510a66-0050-4aee-a855-b8b8f7bd38fd","Type":"ContainerStarted","Data":"c760b94dbafe0b1c3630ea6d85281d7643d0ef36eb588df40abd4954b1d54af3"} Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.343892 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.366756 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_79510a66-0050-4aee-a855-b8b8f7bd38fd/mariadb-client/0.log" Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.402023 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.409816 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.509577 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srjr9\" (UniqueName: \"kubernetes.io/projected/79510a66-0050-4aee-a855-b8b8f7bd38fd-kube-api-access-srjr9\") pod \"79510a66-0050-4aee-a855-b8b8f7bd38fd\" (UID: \"79510a66-0050-4aee-a855-b8b8f7bd38fd\") " Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.516227 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79510a66-0050-4aee-a855-b8b8f7bd38fd-kube-api-access-srjr9" (OuterVolumeSpecName: "kube-api-access-srjr9") pod "79510a66-0050-4aee-a855-b8b8f7bd38fd" (UID: "79510a66-0050-4aee-a855-b8b8f7bd38fd"). 
InnerVolumeSpecName "kube-api-access-srjr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.612740 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srjr9\" (UniqueName: \"kubernetes.io/projected/79510a66-0050-4aee-a855-b8b8f7bd38fd-kube-api-access-srjr9\") on node \"crc\" DevicePath \"\"" Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.840014 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f" Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.852306 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79510a66-0050-4aee-a855-b8b8f7bd38fd" path="/var/lib/kubelet/pods/79510a66-0050-4aee-a855-b8b8f7bd38fd/volumes" Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.911312 5034 scope.go:117] "RemoveContainer" containerID="be4cb2fa633803cdc48ce0f5b8c719d38444de82152109623bedfb3901442ee8" Jan 05 23:19:31 crc kubenswrapper[5034]: I0105 23:19:31.911581 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 05 23:19:32 crc kubenswrapper[5034]: I0105 23:19:32.921065 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"f74e12195f4a4167262b994444528ef434816fe2ad847b01f18b8380f45c84a2"} Jan 05 23:19:58 crc kubenswrapper[5034]: I0105 23:19:58.612810 5034 scope.go:117] "RemoveContainer" containerID="3f9260572a0dabc63a9c80f88bb9358c0a611671a33ba8160d57df0d8e07696f" Jan 05 23:19:58 crc kubenswrapper[5034]: I0105 23:19:58.642676 5034 scope.go:117] "RemoveContainer" containerID="1be2f0af685533a4b1aae84e9a7620559fe3b964cf1919e526d4822702d03b6f" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.754563 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 23:20:03 crc kubenswrapper[5034]: E0105 23:20:03.755395 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79510a66-0050-4aee-a855-b8b8f7bd38fd" containerName="mariadb-client" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.755410 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="79510a66-0050-4aee-a855-b8b8f7bd38fd" containerName="mariadb-client" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.755591 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="79510a66-0050-4aee-a855-b8b8f7bd38fd" containerName="mariadb-client" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.756759 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.759005 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.759500 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-22pvq" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.759905 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.760069 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.760255 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.779185 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.787702 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.789320 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.810146 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.811625 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.819607 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.851570 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.889770 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/014bb009-3fd2-42e1-b51e-21437f54d5d4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.889814 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.889841 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.889866 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " 
pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.889910 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014bb009-3fd2-42e1-b51e-21437f54d5d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.889923 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw4sb\" (UniqueName: \"kubernetes.io/projected/014bb009-3fd2-42e1-b51e-21437f54d5d4-kube-api-access-kw4sb\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.890018 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/014bb009-3fd2-42e1-b51e-21437f54d5d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.890103 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c11ad99-d4cd-44f7-8264-9a84503bb09c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c11ad99-d4cd-44f7-8264-9a84503bb09c\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.991740 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9ad35689-1b4f-4fea-8949-1dfab421f634\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ad35689-1b4f-4fea-8949-1dfab421f634\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.991797 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.991831 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.991856 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.991974 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014bb009-3fd2-42e1-b51e-21437f54d5d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " 
pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992028 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw4sb\" (UniqueName: \"kubernetes.io/projected/014bb009-3fd2-42e1-b51e-21437f54d5d4-kube-api-access-kw4sb\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992129 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992302 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992338 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzc7d\" (UniqueName: \"kubernetes.io/projected/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-kube-api-access-kzc7d\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992378 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16add7fd-19c1-4795-b9ba-f4e692b65fb6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992403 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/014bb009-3fd2-42e1-b51e-21437f54d5d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992505 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992567 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzsbj\" (UniqueName: \"kubernetes.io/projected/16add7fd-19c1-4795-b9ba-f4e692b65fb6-kube-api-access-vzsbj\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992637 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 
23:20:03.992666 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-config\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992711 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992755 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c11ad99-d4cd-44f7-8264-9a84503bb09c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c11ad99-d4cd-44f7-8264-9a84503bb09c\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992828 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/014bb009-3fd2-42e1-b51e-21437f54d5d4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992849 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16add7fd-19c1-4795-b9ba-f4e692b65fb6-config\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992894 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.992943 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/014bb009-3fd2-42e1-b51e-21437f54d5d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.993162 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16add7fd-19c1-4795-b9ba-f4e692b65fb6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.993230 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.993274 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.993304 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-777decce-1e40-4a1a-9003-94c54b13dd0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-777decce-1e40-4a1a-9003-94c54b13dd0a\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.993367 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/014bb009-3fd2-42e1-b51e-21437f54d5d4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.993687 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/014bb009-3fd2-42e1-b51e-21437f54d5d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.996398 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.996672 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c11ad99-d4cd-44f7-8264-9a84503bb09c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c11ad99-d4cd-44f7-8264-9a84503bb09c\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7034c871e9b43060921bc9b9ea20a1d2aeac231d9217be32d55c3b2283d55773/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:03 crc kubenswrapper[5034]: I0105 23:20:03.999858 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.001468 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.011731 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014bb009-3fd2-42e1-b51e-21437f54d5d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.020543 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw4sb\" (UniqueName: \"kubernetes.io/projected/014bb009-3fd2-42e1-b51e-21437f54d5d4-kube-api-access-kw4sb\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " 
pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.024355 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c11ad99-d4cd-44f7-8264-9a84503bb09c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c11ad99-d4cd-44f7-8264-9a84503bb09c\") pod \"ovsdbserver-sb-0\" (UID: \"014bb009-3fd2-42e1-b51e-21437f54d5d4\") " pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.087550 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.095784 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16add7fd-19c1-4795-b9ba-f4e692b65fb6-config\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.095836 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16add7fd-19c1-4795-b9ba-f4e692b65fb6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.095866 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-777decce-1e40-4a1a-9003-94c54b13dd0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-777decce-1e40-4a1a-9003-94c54b13dd0a\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.095897 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9ad35689-1b4f-4fea-8949-1dfab421f634\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ad35689-1b4f-4fea-8949-1dfab421f634\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.095927 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.095957 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.095975 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096000 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: 
\"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096017 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096069 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzc7d\" (UniqueName: \"kubernetes.io/projected/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-kube-api-access-kzc7d\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096139 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16add7fd-19c1-4795-b9ba-f4e692b65fb6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096165 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096194 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzsbj\" (UniqueName: \"kubernetes.io/projected/16add7fd-19c1-4795-b9ba-f4e692b65fb6-kube-api-access-vzsbj\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096217 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096232 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-config\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.096255 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.097000 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16add7fd-19c1-4795-b9ba-f4e692b65fb6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.097646 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/16add7fd-19c1-4795-b9ba-f4e692b65fb6-config\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.098236 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.098605 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16add7fd-19c1-4795-b9ba-f4e692b65fb6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.100184 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.100215 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9ad35689-1b4f-4fea-8949-1dfab421f634\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ad35689-1b4f-4fea-8949-1dfab421f634\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1c63f1cb4eab5cde52f458548c23d01ba4d1940fe24f32d6961e3cbc205206ac/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.100273 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.100314 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-777decce-1e40-4a1a-9003-94c54b13dd0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-777decce-1e40-4a1a-9003-94c54b13dd0a\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fb0d3b749a82d2992c51a7216c53acba9df9cd324bfe7d8ecde90a6cba8e3cc0/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.100536 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-config\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.102488 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.104012 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.104133 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.107840 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.109030 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.109404 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.114103 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16add7fd-19c1-4795-b9ba-f4e692b65fb6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.120286 
5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzsbj\" (UniqueName: \"kubernetes.io/projected/16add7fd-19c1-4795-b9ba-f4e692b65fb6-kube-api-access-vzsbj\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.120791 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzc7d\" (UniqueName: \"kubernetes.io/projected/dcce2e6b-c1ba-4d6b-a972-96f864bd3468-kube-api-access-kzc7d\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.139210 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9ad35689-1b4f-4fea-8949-1dfab421f634\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ad35689-1b4f-4fea-8949-1dfab421f634\") pod \"ovsdbserver-sb-2\" (UID: \"16add7fd-19c1-4795-b9ba-f4e692b65fb6\") " pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.146743 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-777decce-1e40-4a1a-9003-94c54b13dd0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-777decce-1e40-4a1a-9003-94c54b13dd0a\") pod \"ovsdbserver-sb-1\" (UID: \"dcce2e6b-c1ba-4d6b-a972-96f864bd3468\") " pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.384124 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.387453 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.391708 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-s4pxx" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.392010 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.392164 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.392534 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.406788 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.409267 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.409962 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.420834 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.427702 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.437504 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.439242 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.443339 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.450180 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.503734 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nll9w\" (UniqueName: \"kubernetes.io/projected/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-kube-api-access-nll9w\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.503855 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.503891 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8908e22b-933f-43ef-a88d-058610136209-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.503930 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.503948 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504207 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504268 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504302 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504384 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-config\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504424 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8908e22b-933f-43ef-a88d-058610136209-config\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504471 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504531 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h749g\" (UniqueName: \"kubernetes.io/projected/8908e22b-933f-43ef-a88d-058610136209-kube-api-access-h749g\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504565 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8908e22b-933f-43ef-a88d-058610136209-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504696 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504772 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2dec0b19-850a-4c0f-82d3-ccdc991e5094\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dec0b19-850a-4c0f-82d3-ccdc991e5094\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.504932 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7d6428e-9dc7-4a96-b247-5b61db448a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7d6428e-9dc7-4a96-b247-5b61db448a28\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607394 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607461 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687b551c-0fde-402a-abf5-076a664acdac-config\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607491 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2dec0b19-850a-4c0f-82d3-ccdc991e5094\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dec0b19-850a-4c0f-82d3-ccdc991e5094\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607514 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5f4cb8f2-d79c-4ce9-98d7-a160dcf43bec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f4cb8f2-d79c-4ce9-98d7-a160dcf43bec\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607579 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7d6428e-9dc7-4a96-b247-5b61db448a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7d6428e-9dc7-4a96-b247-5b61db448a28\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607602 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nll9w\" (UniqueName: \"kubernetes.io/projected/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-kube-api-access-nll9w\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607630 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/687b551c-0fde-402a-abf5-076a664acdac-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607662 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607682 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607709 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8908e22b-933f-43ef-a88d-058610136209-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607733 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5l89\" (UniqueName: \"kubernetes.io/projected/687b551c-0fde-402a-abf5-076a664acdac-kube-api-access-r5l89\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607785 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607805 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607824 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607848 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607865 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607882 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607904 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607926 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-config\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " 
pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607941 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8908e22b-933f-43ef-a88d-058610136209-config\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.607976 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.608045 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h749g\" (UniqueName: \"kubernetes.io/projected/8908e22b-933f-43ef-a88d-058610136209-kube-api-access-h749g\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.608131 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687b551c-0fde-402a-abf5-076a664acdac-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.608155 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8908e22b-933f-43ef-a88d-058610136209-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.609845 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.610423 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8908e22b-933f-43ef-a88d-058610136209-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.610721 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-config\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.611339 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8908e22b-933f-43ef-a88d-058610136209-config\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.611391 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-scripts\") pod \"ovsdbserver-nb-1\" (UID: 
\"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.612952 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8908e22b-933f-43ef-a88d-058610136209-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.616677 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.616748 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2dec0b19-850a-4c0f-82d3-ccdc991e5094\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dec0b19-850a-4c0f-82d3-ccdc991e5094\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1322b7029911cc25206db2516279f45fd792a471858875baff4b480fd0517439/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.617425 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.618654 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7d6428e-9dc7-4a96-b247-5b61db448a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7d6428e-9dc7-4a96-b247-5b61db448a28\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dbc6941a8e8eb605c9b8d153247bf4adca7df5d23c01a50285dd80915dc48243/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.618275 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.620606 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.620665 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.620944 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.623210 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.623830 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.624693 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8908e22b-933f-43ef-a88d-058610136209-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.628851 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h749g\" (UniqueName: \"kubernetes.io/projected/8908e22b-933f-43ef-a88d-058610136209-kube-api-access-h749g\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.630795 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nll9w\" (UniqueName: \"kubernetes.io/projected/26e0a32c-15a9-48e9-9ecc-97bfdbcc923e-kube-api-access-nll9w\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.666685 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7d6428e-9dc7-4a96-b247-5b61db448a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7d6428e-9dc7-4a96-b247-5b61db448a28\") pod \"ovsdbserver-nb-0\" (UID: \"8908e22b-933f-43ef-a88d-058610136209\") " pod="openstack/ovsdbserver-nb-0" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.679589 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2dec0b19-850a-4c0f-82d3-ccdc991e5094\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dec0b19-850a-4c0f-82d3-ccdc991e5094\") pod \"ovsdbserver-nb-1\" (UID: \"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e\") " pod="openstack/ovsdbserver-nb-1" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.709744 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687b551c-0fde-402a-abf5-076a664acdac-config\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.709826 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5f4cb8f2-d79c-4ce9-98d7-a160dcf43bec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f4cb8f2-d79c-4ce9-98d7-a160dcf43bec\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.709910 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/687b551c-0fde-402a-abf5-076a664acdac-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.709946 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.709985 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5l89\" (UniqueName: \"kubernetes.io/projected/687b551c-0fde-402a-abf5-076a664acdac-kube-api-access-r5l89\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.710015 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.710042 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.710089 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687b551c-0fde-402a-abf5-076a664acdac-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.710759 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687b551c-0fde-402a-abf5-076a664acdac-config\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.711255 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687b551c-0fde-402a-abf5-076a664acdac-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.712377 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/687b551c-0fde-402a-abf5-076a664acdac-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.715292 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2" Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.716204 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.716339 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5f4cb8f2-d79c-4ce9-98d7-a160dcf43bec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f4cb8f2-d79c-4ce9-98d7-a160dcf43bec\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2dd1989a31f03750fbe2f31209dd3971da15478b1c3cb8b36e55b345a16ecb8f/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.718041 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.719167 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.721238 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b551c-0fde-402a-abf5-076a664acdac-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.738868 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5l89\" (UniqueName: \"kubernetes.io/projected/687b551c-0fde-402a-abf5-076a664acdac-kube-api-access-r5l89\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.757205 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5f4cb8f2-d79c-4ce9-98d7-a160dcf43bec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5f4cb8f2-d79c-4ce9-98d7-a160dcf43bec\") pod \"ovsdbserver-nb-2\" (UID: \"687b551c-0fde-402a-abf5-076a664acdac\") " pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.777591 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.791413 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:04 crc kubenswrapper[5034]: I0105 23:20:04.975381 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.084867 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.224037 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 05 23:20:05 crc kubenswrapper[5034]: W0105 23:20:05.236430 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8908e22b_933f_43ef_a88d_058610136209.slice/crio-af251e04db68e0c2b64b1a8041154a24146c595dad3a4cd1dee1b97661cee0b6 WatchSource:0}: Error finding container af251e04db68e0c2b64b1a8041154a24146c595dad3a4cd1dee1b97661cee0b6: Status 404 returned error can't find the container with id af251e04db68e0c2b64b1a8041154a24146c595dad3a4cd1dee1b97661cee0b6
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.288227 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"16add7fd-19c1-4795-b9ba-f4e692b65fb6","Type":"ContainerStarted","Data":"b72a459713244c2272fb77e3449dc6a39bda4a6cb281bf58413d164f41991b9b"}
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.290597 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"014bb009-3fd2-42e1-b51e-21437f54d5d4","Type":"ContainerStarted","Data":"44e4f4ffa556938044af2ede6c72cfab7eebe911014e4f17135a857e0354564a"}
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.290627 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"014bb009-3fd2-42e1-b51e-21437f54d5d4","Type":"ContainerStarted","Data":"7c3372c98f266e44984fa3734cd22f4b635b8dbe8fcc4a6812ec0504ebee9b23"}
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.290648 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"014bb009-3fd2-42e1-b51e-21437f54d5d4","Type":"ContainerStarted","Data":"f5d587fb2fc9afe94704fd80f8a4ce61ddb115a9e000f49e259c57c282422378"}
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.296601 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"dcce2e6b-c1ba-4d6b-a972-96f864bd3468","Type":"ContainerStarted","Data":"46faa89ee3edaa9690f5b2ed93e38cc81e34978299dcc73fd4da0fb598ebdb82"}
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.324003 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.323975087 podStartE2EDuration="3.323975087s" podCreationTimestamp="2026-01-05 23:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:05.320030325 +0000 UTC m=+5297.692029784" watchObservedRunningTime="2026-01-05 23:20:05.323975087 +0000 UTC m=+5297.695974536"
Jan 05 23:20:05 crc kubenswrapper[5034]: I0105 23:20:05.434742 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.247638 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 05 23:20:06 crc kubenswrapper[5034]: W0105 23:20:06.252482 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod687b551c_0fde_402a_abf5_076a664acdac.slice/crio-370e2f203255b2e53c8592cb68645d7e5e6d8aa3d63f2ed2ccc329c5a00e5c1e WatchSource:0}: Error finding container 370e2f203255b2e53c8592cb68645d7e5e6d8aa3d63f2ed2ccc329c5a00e5c1e: Status 404 returned error can't find the container with id 370e2f203255b2e53c8592cb68645d7e5e6d8aa3d63f2ed2ccc329c5a00e5c1e
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.307614 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"687b551c-0fde-402a-abf5-076a664acdac","Type":"ContainerStarted","Data":"370e2f203255b2e53c8592cb68645d7e5e6d8aa3d63f2ed2ccc329c5a00e5c1e"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.309889 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"dcce2e6b-c1ba-4d6b-a972-96f864bd3468","Type":"ContainerStarted","Data":"1eaa1fb2fcf88f2bff73e36bfe308b097c92ca25a4dc0e9c60306d68e6907a7f"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.309953 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"dcce2e6b-c1ba-4d6b-a972-96f864bd3468","Type":"ContainerStarted","Data":"b59e8832e3b5c8ba98a8d0f5e17024031311a00c2589216f1afa8180074ec1a1"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.312020 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e","Type":"ContainerStarted","Data":"16c2a4bbc41175da072b99da4ca1ba00d2812e986a966b130e26808c90ce4e2c"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.312102 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e","Type":"ContainerStarted","Data":"0caf415999534f55d790eb96d86632a83ab6728565993aaeb533231f1ed34c3c"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.312118 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"26e0a32c-15a9-48e9-9ecc-97bfdbcc923e","Type":"ContainerStarted","Data":"5e566dab09e363574cd6d014375c329721565d59891a76a59c075430951c218c"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.317210 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8908e22b-933f-43ef-a88d-058610136209","Type":"ContainerStarted","Data":"cece0b6c0fe0d76dc6e72e9f1f569025f693f5c8b924b988988ef0d1e9e71bfe"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.317318 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8908e22b-933f-43ef-a88d-058610136209","Type":"ContainerStarted","Data":"938bd27fae337c61ab04c9b4ebae71adf4d51111d755127a9e0b5e5d2edb710c"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.317334 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8908e22b-933f-43ef-a88d-058610136209","Type":"ContainerStarted","Data":"af251e04db68e0c2b64b1a8041154a24146c595dad3a4cd1dee1b97661cee0b6"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.319789 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"16add7fd-19c1-4795-b9ba-f4e692b65fb6","Type":"ContainerStarted","Data":"ae491589adbf57983e42e8b1e3b6ff31782b1acd5e6c347c66fcd004f1520a74"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.319824 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"16add7fd-19c1-4795-b9ba-f4e692b65fb6","Type":"ContainerStarted","Data":"0d114c474f2af1b4adc12cd313f24d82ee3b5d668036e4b8c92e3cdca4c24847"}
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.335843 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.335817952 podStartE2EDuration="4.335817952s" podCreationTimestamp="2026-01-05 23:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:06.329442351 +0000 UTC m=+5298.701441810" watchObservedRunningTime="2026-01-05 23:20:06.335817952 +0000 UTC m=+5298.707817391"
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.359729 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.35969405 podStartE2EDuration="3.35969405s" podCreationTimestamp="2026-01-05 23:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:06.351566769 +0000 UTC m=+5298.723566208" watchObservedRunningTime="2026-01-05 23:20:06.35969405 +0000 UTC m=+5298.731693489"
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.380004 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.379976415 podStartE2EDuration="4.379976415s" podCreationTimestamp="2026-01-05 23:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:06.373152882 +0000 UTC m=+5298.745152321" watchObservedRunningTime="2026-01-05 23:20:06.379976415 +0000 UTC m=+5298.751975854"
Jan 05 23:20:06 crc kubenswrapper[5034]: I0105 23:20:06.394272 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.39424841 podStartE2EDuration="3.39424841s" podCreationTimestamp="2026-01-05 23:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:06.391928575 +0000 UTC m=+5298.763928014" watchObservedRunningTime="2026-01-05 23:20:06.39424841 +0000 UTC m=+5298.766247849"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.088630 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.140143 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.332927 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"687b551c-0fde-402a-abf5-076a664acdac","Type":"ContainerStarted","Data":"7463b4d5f70b377a08ac2774e4acc7ebb33cdde2b59696c69238a31675082747"}
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.333453 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.333497 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"687b551c-0fde-402a-abf5-076a664acdac","Type":"ContainerStarted","Data":"f16013abbbf35757c7eeed29f9d98c23e06a3c49440b331776ca84e26bcdea62"}
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.376753 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.376726983 podStartE2EDuration="4.376726983s" podCreationTimestamp="2026-01-05 23:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:07.364522467 +0000 UTC m=+5299.736521916" watchObservedRunningTime="2026-01-05 23:20:07.376726983 +0000 UTC m=+5299.748726422"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.410873 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.428039 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.719036 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.778883 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1"
Jan 05 23:20:07 crc kubenswrapper[5034]: I0105 23:20:07.791837 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.131120 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.410802 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.428144 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.450259 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-8j6fj"]
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.451656 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.453523 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.510790 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-ovsdbserver-sb\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.510869 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-dns-svc\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.510965 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-config\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.511239 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-8j6fj"]
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.511749 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvwcf\" (UniqueName: \"kubernetes.io/projected/6cb70195-c67e-4f5b-971b-9cac1907ea8c-kube-api-access-gvwcf\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.613958 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvwcf\" (UniqueName: \"kubernetes.io/projected/6cb70195-c67e-4f5b-971b-9cac1907ea8c-kube-api-access-gvwcf\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.614107 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-ovsdbserver-sb\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.614141 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-dns-svc\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.614173 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-config\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.615215 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-config\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.616187 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-ovsdbserver-sb\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.616734 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-dns-svc\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.642644 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvwcf\" (UniqueName: \"kubernetes.io/projected/6cb70195-c67e-4f5b-971b-9cac1907ea8c-kube-api-access-gvwcf\") pod \"dnsmasq-dns-df6c6d7b7-8j6fj\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") " pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.718541 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.778704 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.779905 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:09 crc kubenswrapper[5034]: I0105 23:20:09.792482 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.311339 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-8j6fj"]
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.366631 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj" event={"ID":"6cb70195-c67e-4f5b-971b-9cac1907ea8c","Type":"ContainerStarted","Data":"1b35f9c6dce476524db9b8c5433c3470b1f96532ff05c1f045eacf740d64b2a9"}
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.468608 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.475805 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.526311 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.536894 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.764001 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.820864 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.829979 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.851696 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:10 crc kubenswrapper[5034]: I0105 23:20:10.914406 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.064564 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-8j6fj"]
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.090539 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d6d46bc59-vwttr"]
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.092469 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.095852 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.109225 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d6d46bc59-vwttr"]
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.159436 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-dns-svc\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.159567 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-sb\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.159630 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwvwx\" (UniqueName: \"kubernetes.io/projected/f96f655c-7c9b-4f0e-a83d-153fbed2d771-kube-api-access-bwvwx\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.159654 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-nb\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.159696 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-config\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.261978 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-sb\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.262064 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvwx\" (UniqueName: \"kubernetes.io/projected/f96f655c-7c9b-4f0e-a83d-153fbed2d771-kube-api-access-bwvwx\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.262107 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-nb\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.262149 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-config\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.262249 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-dns-svc\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.263179 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-sb\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.263288 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-dns-svc\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.263736 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-nb\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.263896 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-config\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.284277 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvwx\" (UniqueName: \"kubernetes.io/projected/f96f655c-7c9b-4f0e-a83d-153fbed2d771-kube-api-access-bwvwx\") pod \"dnsmasq-dns-d6d46bc59-vwttr\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.376303 5034 generic.go:334] "Generic (PLEG): container finished" podID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" containerID="80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace" exitCode=0
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.376449 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj" event={"ID":"6cb70195-c67e-4f5b-971b-9cac1907ea8c","Type":"ContainerDied","Data":"80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace"}
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.412891 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.422887 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2"
Jan 05 23:20:11 crc kubenswrapper[5034]: I0105 23:20:11.930039 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d6d46bc59-vwttr"]
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.392436 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj" event={"ID":"6cb70195-c67e-4f5b-971b-9cac1907ea8c","Type":"ContainerStarted","Data":"62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad"}
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.393023 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.392522 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj" podUID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" containerName="dnsmasq-dns" containerID="cri-o://62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad" gracePeriod=10
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.400163 5034 generic.go:334] "Generic (PLEG): container finished" podID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerID="23587fb5548b4f93872185f7d12ac56ec53db9bdf33699b9a5ad094605f7938a" exitCode=0
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.400231 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" event={"ID":"f96f655c-7c9b-4f0e-a83d-153fbed2d771","Type":"ContainerDied","Data":"23587fb5548b4f93872185f7d12ac56ec53db9bdf33699b9a5ad094605f7938a"}
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.400300 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" event={"ID":"f96f655c-7c9b-4f0e-a83d-153fbed2d771","Type":"ContainerStarted","Data":"74b95637d70f352219ca854f271d64365f7343482325df5e60264e834d6d86cb"}
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.431733 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj" podStartSLOduration=3.43169879 podStartE2EDuration="3.43169879s" podCreationTimestamp="2026-01-05 23:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:12.420362819 +0000 UTC m=+5304.792362288" watchObservedRunningTime="2026-01-05 23:20:12.43169879 +0000 UTC m=+5304.803698229"
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.874364 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.905105 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-config\") pod \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") "
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.905857 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvwcf\" (UniqueName: \"kubernetes.io/projected/6cb70195-c67e-4f5b-971b-9cac1907ea8c-kube-api-access-gvwcf\") pod \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") "
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.905963 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-ovsdbserver-sb\") pod \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") "
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.906008 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-dns-svc\") pod \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\" (UID: \"6cb70195-c67e-4f5b-971b-9cac1907ea8c\") "
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.911147 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb70195-c67e-4f5b-971b-9cac1907ea8c-kube-api-access-gvwcf" (OuterVolumeSpecName: "kube-api-access-gvwcf") pod "6cb70195-c67e-4f5b-971b-9cac1907ea8c" (UID: "6cb70195-c67e-4f5b-971b-9cac1907ea8c"). InnerVolumeSpecName "kube-api-access-gvwcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.959636 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-config" (OuterVolumeSpecName: "config") pod "6cb70195-c67e-4f5b-971b-9cac1907ea8c" (UID: "6cb70195-c67e-4f5b-971b-9cac1907ea8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.970928 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cb70195-c67e-4f5b-971b-9cac1907ea8c" (UID: "6cb70195-c67e-4f5b-971b-9cac1907ea8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:20:12 crc kubenswrapper[5034]: I0105 23:20:12.974622 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6cb70195-c67e-4f5b-971b-9cac1907ea8c" (UID: "6cb70195-c67e-4f5b-971b-9cac1907ea8c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.007767 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.007943 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.008026 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb70195-c67e-4f5b-971b-9cac1907ea8c-config\") on node \"crc\" DevicePath \"\""
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.008136 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvwcf\" (UniqueName: \"kubernetes.io/projected/6cb70195-c67e-4f5b-971b-9cac1907ea8c-kube-api-access-gvwcf\") on node \"crc\" DevicePath \"\""
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.410794 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" event={"ID":"f96f655c-7c9b-4f0e-a83d-153fbed2d771","Type":"ContainerStarted","Data":"6b99077d759d95bfe2ea9a5bca8c399631cdd895d2593f5157ab81360499c4b1"}
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.411901 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr"
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.412577 5034 generic.go:334] "Generic (PLEG): container finished" podID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" containerID="62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad" exitCode=0
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.412612 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj" event={"ID":"6cb70195-c67e-4f5b-971b-9cac1907ea8c","Type":"ContainerDied","Data":"62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad"}
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.412635 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj" event={"ID":"6cb70195-c67e-4f5b-971b-9cac1907ea8c","Type":"ContainerDied","Data":"1b35f9c6dce476524db9b8c5433c3470b1f96532ff05c1f045eacf740d64b2a9"}
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.412656 5034 scope.go:117] "RemoveContainer" containerID="62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad"
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.412616 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6c6d7b7-8j6fj"
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.435613 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" podStartSLOduration=2.435582879 podStartE2EDuration="2.435582879s" podCreationTimestamp="2026-01-05 23:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:13.431710439 +0000 UTC m=+5305.803709908" watchObservedRunningTime="2026-01-05 23:20:13.435582879 +0000 UTC m=+5305.807582318"
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.441871 5034 scope.go:117] "RemoveContainer" containerID="80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace"
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.458590 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-8j6fj"]
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.464928 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-8j6fj"]
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.478149 5034 scope.go:117] "RemoveContainer" containerID="62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad"
Jan 05 23:20:13 crc kubenswrapper[5034]: E0105 23:20:13.478648 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad\": container with ID starting with 62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad not found: ID does not exist" containerID="62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad"
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.478708 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad"} err="failed to get container status \"62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad\": rpc error: code = NotFound desc = could not find container \"62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad\": container with ID starting with 62e56fb5f912a7818d030f750ef7657a17be15e246dd31fe038efc307b5966ad not found: ID does not exist"
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.478744 5034 scope.go:117] "RemoveContainer" containerID="80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace"
Jan 05 23:20:13 crc kubenswrapper[5034]: E0105 23:20:13.479339 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace\": container with ID starting with 80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace not found: ID does not exist" containerID="80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace"
Jan 05 23:20:13 crc kubenswrapper[5034]: I0105 23:20:13.479398 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace"} err="failed to get container status \"80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace\": rpc error: code = NotFound desc = could not find container \"80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace\": container with ID starting with 80db762d5e68fed3dd072f2a485891caf20ecf2714489b8f63e2fabc85e52ace not found: ID does not exist"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.022571 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" path="/var/lib/kubelet/pods/6cb70195-c67e-4f5b-971b-9cac1907ea8c/volumes"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.622335 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"]
Jan 05 23:20:14 crc kubenswrapper[5034]: E0105 23:20:14.622939 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" containerName="init"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.622964 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" containerName="init"
Jan 05 23:20:14 crc kubenswrapper[5034]: E0105 23:20:14.622986 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" containerName="dnsmasq-dns"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.623002 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" containerName="dnsmasq-dns"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.623541 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb70195-c67e-4f5b-971b-9cac1907ea8c" containerName="dnsmasq-dns"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.624831 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.627478 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.647453 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.817279 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4a37fc3c-2fa1-40ac-9946-da27063ad270\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a37fc3c-2fa1-40ac-9946-da27063ad270\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.817398 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3c6fe902-ce44-49f5-9131-1e83280ca4c0-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.817434 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45mns\" (UniqueName: \"kubernetes.io/projected/3c6fe902-ce44-49f5-9131-1e83280ca4c0-kube-api-access-45mns\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.919534 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4a37fc3c-2fa1-40ac-9946-da27063ad270\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a37fc3c-2fa1-40ac-9946-da27063ad270\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.919645 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3c6fe902-ce44-49f5-9131-1e83280ca4c0-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.919676 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45mns\" (UniqueName: \"kubernetes.io/projected/3c6fe902-ce44-49f5-9131-1e83280ca4c0-kube-api-access-45mns\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.929417 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.929502 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4a37fc3c-2fa1-40ac-9946-da27063ad270\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a37fc3c-2fa1-40ac-9946-da27063ad270\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e5b91cd6fa4faf850883f3ef505d709e93ef25ddc04a34c4791933fe9559d7f3/globalmount\"" pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.938811 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45mns\" (UniqueName: \"kubernetes.io/projected/3c6fe902-ce44-49f5-9131-1e83280ca4c0-kube-api-access-45mns\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.940730 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3c6fe902-ce44-49f5-9131-1e83280ca4c0-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:14 crc kubenswrapper[5034]: I0105 23:20:14.986523 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4a37fc3c-2fa1-40ac-9946-da27063ad270\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a37fc3c-2fa1-40ac-9946-da27063ad270\") pod \"ovn-copy-data\" (UID: \"3c6fe902-ce44-49f5-9131-1e83280ca4c0\") " pod="openstack/ovn-copy-data"
Jan 05 23:20:15 crc kubenswrapper[5034]: I0105 23:20:15.263542 5034 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-copy-data" Jan 05 23:20:15 crc kubenswrapper[5034]: I0105 23:20:15.820918 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 05 23:20:15 crc kubenswrapper[5034]: W0105 23:20:15.827661 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c6fe902_ce44_49f5_9131_1e83280ca4c0.slice/crio-4c6b3e432a286e5cc3a1e210a9722fa2773841b9aa98254d77059edc31f3b1a0 WatchSource:0}: Error finding container 4c6b3e432a286e5cc3a1e210a9722fa2773841b9aa98254d77059edc31f3b1a0: Status 404 returned error can't find the container with id 4c6b3e432a286e5cc3a1e210a9722fa2773841b9aa98254d77059edc31f3b1a0 Jan 05 23:20:16 crc kubenswrapper[5034]: I0105 23:20:16.448158 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3c6fe902-ce44-49f5-9131-1e83280ca4c0","Type":"ContainerStarted","Data":"4c6b3e432a286e5cc3a1e210a9722fa2773841b9aa98254d77059edc31f3b1a0"} Jan 05 23:20:17 crc kubenswrapper[5034]: I0105 23:20:17.461041 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3c6fe902-ce44-49f5-9131-1e83280ca4c0","Type":"ContainerStarted","Data":"df619bea169f867b6066ad7063c8603955ca2259f1098208000087812bbba7ac"} Jan 05 23:20:17 crc kubenswrapper[5034]: I0105 23:20:17.490471 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.836837535 podStartE2EDuration="4.490442104s" podCreationTimestamp="2026-01-05 23:20:13 +0000 UTC" firstStartedPulling="2026-01-05 23:20:15.830040443 +0000 UTC m=+5308.202039882" lastFinishedPulling="2026-01-05 23:20:16.483645012 +0000 UTC m=+5308.855644451" observedRunningTime="2026-01-05 23:20:17.480667737 +0000 UTC m=+5309.852667206" watchObservedRunningTime="2026-01-05 23:20:17.490442104 +0000 UTC m=+5309.862441553" Jan 05 23:20:21 crc kubenswrapper[5034]: I0105 23:20:21.415340 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" Jan 05 23:20:21 crc kubenswrapper[5034]: I0105 23:20:21.485194 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-fb9jd"] Jan 05 23:20:21 crc kubenswrapper[5034]: I0105 23:20:21.485478 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" podUID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" containerName="dnsmasq-dns" containerID="cri-o://1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845" gracePeriod=10 Jan 05 23:20:21 crc kubenswrapper[5034]: I0105 23:20:21.994834 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.167332 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-dns-svc\") pod \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.167626 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-config\") pod \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.167665 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbptg\" (UniqueName: \"kubernetes.io/projected/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-kube-api-access-pbptg\") pod \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\" (UID: \"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8\") " Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.176002 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-kube-api-access-pbptg" (OuterVolumeSpecName: "kube-api-access-pbptg") pod "8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" (UID: "8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8"). InnerVolumeSpecName "kube-api-access-pbptg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.212014 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" (UID: "8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.237725 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-config" (OuterVolumeSpecName: "config") pod "8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" (UID: "8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.270049 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.270108 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.270119 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbptg\" (UniqueName: \"kubernetes.io/projected/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8-kube-api-access-pbptg\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.508207 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.508402 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" event={"ID":"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8","Type":"ContainerDied","Data":"1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845"} Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.508337 5034 generic.go:334] "Generic (PLEG): container finished" podID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" containerID="1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845" exitCode=0 Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.509601 5034 scope.go:117] "RemoveContainer" containerID="1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.511386 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-fb9jd" event={"ID":"8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8","Type":"ContainerDied","Data":"f202225cfef07365a1453510435cd5b103de27e779542a824a0c4bf592150b03"} Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.553893 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-fb9jd"] Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.559667 5034 scope.go:117] "RemoveContainer" containerID="84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.561462 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-fb9jd"] Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.589752 5034 scope.go:117] "RemoveContainer" containerID="1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845" Jan 05 23:20:22 crc kubenswrapper[5034]: E0105 23:20:22.590607 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845\": container with ID starting with 1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845 not found: ID does not exist" containerID="1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.590691 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845"} err="failed to get container status \"1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845\": rpc error: code = NotFound desc = could not find container \"1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845\": container with ID starting with 1b39225bafc68550f42738532471de3443320371298bfbba22bf3fadfa3fe845 not found: ID does not exist" Jan 05 23:20:22 crc kubenswrapper[5034]: I0105 23:20:22.590733 5034 scope.go:117] "RemoveContainer" containerID="84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72" Jan 05 23:20:22 crc kubenswrapper[5034]: E0105 23:20:22.591717 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72\": container with ID starting with 84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72 not found: ID does not exist" containerID="84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72" Jan 05 23:20:22 crc kubenswrapper[5034]: 
I0105 23:20:22.592002 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72"} err="failed to get container status \"84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72\": rpc error: code = NotFound desc = could not find container \"84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72\": container with ID starting with 84400b77e138427d08c9ccb7fcff86a63ab1929f9ef61ca298a4294db25d6d72 not found: ID does not exist" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.734629 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 05 23:20:23 crc kubenswrapper[5034]: E0105 23:20:23.735776 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" containerName="init" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.735797 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" containerName="init" Jan 05 23:20:23 crc kubenswrapper[5034]: E0105 23:20:23.735838 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" containerName="dnsmasq-dns" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.735846 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" containerName="dnsmasq-dns" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.736065 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" containerName="dnsmasq-dns" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.737499 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.744642 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.745405 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vxwql" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.745718 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.747618 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.747746 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.850070 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8" path="/var/lib/kubelet/pods/8c8c7cf0-6bc1-4a94-97a4-a60bcec994e8/volumes" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.903264 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.903348 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq4fm\" (UniqueName: 
\"kubernetes.io/projected/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-kube-api-access-hq4fm\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.903486 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-config\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.903541 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.903683 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.903724 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:23 crc kubenswrapper[5034]: I0105 23:20:23.903840 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-scripts\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.005522 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-scripts\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.005604 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.005657 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq4fm\" (UniqueName: \"kubernetes.io/projected/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-kube-api-access-hq4fm\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.005685 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-config\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.005706 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.005815 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.005850 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.006410 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.007102 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-scripts\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.007150 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-config\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.013315 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.018763 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.019786 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.031909 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq4fm\" (UniqueName: \"kubernetes.io/projected/f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a-kube-api-access-hq4fm\") pod \"ovn-northd-0\" (UID: \"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a\") " pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.078597 5034 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 23:20:24 crc kubenswrapper[5034]: I0105 23:20:24.591757 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 23:20:25 crc kubenswrapper[5034]: I0105 23:20:25.546109 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a","Type":"ContainerStarted","Data":"fd279b16b15d0bbf4f1e47c7d290d4ef2c1bc85eca0ea8920e3c5c68c2ef2884"} Jan 05 23:20:25 crc kubenswrapper[5034]: I0105 23:20:25.548668 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 05 23:20:25 crc kubenswrapper[5034]: I0105 23:20:25.548707 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a","Type":"ContainerStarted","Data":"d95c0b745a4e20395e0e6616a649b94faafa29393f834be065debec077091f0e"} Jan 05 23:20:25 crc kubenswrapper[5034]: I0105 23:20:25.548725 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a","Type":"ContainerStarted","Data":"9ef964e4dc8b6ca38c78ff01f355308f9d33435a46b29ceefce6d6145a25aeea"} Jan 05 23:20:25 crc kubenswrapper[5034]: I0105 23:20:25.576803 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.57677844 podStartE2EDuration="2.57677844s" podCreationTimestamp="2026-01-05 23:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:25.571308745 +0000 UTC m=+5317.943308184" watchObservedRunningTime="2026-01-05 23:20:25.57677844 +0000 UTC m=+5317.948777879" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.711491 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7n6zf"] Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.713831 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.730744 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-71ef-account-create-update-7wzzf"] Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.732197 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.740562 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.751593 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7n6zf"] Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.764986 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-71ef-account-create-update-7wzzf"] Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.855530 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae531dad-14d8-4872-8542-8b1c6fd9e388-operator-scripts\") pod \"keystone-db-create-7n6zf\" (UID: \"ae531dad-14d8-4872-8542-8b1c6fd9e388\") " pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.855583 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xngt2\" (UniqueName: \"kubernetes.io/projected/72f5d690-a7b2-4057-bc84-4108941c17ca-kube-api-access-xngt2\") pod \"keystone-71ef-account-create-update-7wzzf\" (UID: \"72f5d690-a7b2-4057-bc84-4108941c17ca\") " pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.855746 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72f5d690-a7b2-4057-bc84-4108941c17ca-operator-scripts\") pod \"keystone-71ef-account-create-update-7wzzf\" (UID: \"72f5d690-a7b2-4057-bc84-4108941c17ca\") " pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.855894 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp96h\" (UniqueName: \"kubernetes.io/projected/ae531dad-14d8-4872-8542-8b1c6fd9e388-kube-api-access-wp96h\") pod \"keystone-db-create-7n6zf\" (UID: \"ae531dad-14d8-4872-8542-8b1c6fd9e388\") " pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.958652 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae531dad-14d8-4872-8542-8b1c6fd9e388-operator-scripts\") pod \"keystone-db-create-7n6zf\" (UID: \"ae531dad-14d8-4872-8542-8b1c6fd9e388\") " pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.958765 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xngt2\" (UniqueName: \"kubernetes.io/projected/72f5d690-a7b2-4057-bc84-4108941c17ca-kube-api-access-xngt2\") pod \"keystone-71ef-account-create-update-7wzzf\" (UID: \"72f5d690-a7b2-4057-bc84-4108941c17ca\") " pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.959485 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae531dad-14d8-4872-8542-8b1c6fd9e388-operator-scripts\") pod \"keystone-db-create-7n6zf\" (UID: \"ae531dad-14d8-4872-8542-8b1c6fd9e388\") " pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.960926 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72f5d690-a7b2-4057-bc84-4108941c17ca-operator-scripts\") pod \"keystone-71ef-account-create-update-7wzzf\" (UID: \"72f5d690-a7b2-4057-bc84-4108941c17ca\") " pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.961462 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp96h\" (UniqueName: \"kubernetes.io/projected/ae531dad-14d8-4872-8542-8b1c6fd9e388-kube-api-access-wp96h\") pod \"keystone-db-create-7n6zf\" (UID: \"ae531dad-14d8-4872-8542-8b1c6fd9e388\") " pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.962416 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72f5d690-a7b2-4057-bc84-4108941c17ca-operator-scripts\") pod \"keystone-71ef-account-create-update-7wzzf\" (UID: \"72f5d690-a7b2-4057-bc84-4108941c17ca\") " pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.986773 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp96h\" (UniqueName: \"kubernetes.io/projected/ae531dad-14d8-4872-8542-8b1c6fd9e388-kube-api-access-wp96h\") pod \"keystone-db-create-7n6zf\" (UID: \"ae531dad-14d8-4872-8542-8b1c6fd9e388\") " pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:29 crc kubenswrapper[5034]: I0105 23:20:29.992817 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xngt2\" (UniqueName: \"kubernetes.io/projected/72f5d690-a7b2-4057-bc84-4108941c17ca-kube-api-access-xngt2\") pod \"keystone-71ef-account-create-update-7wzzf\" (UID: \"72f5d690-a7b2-4057-bc84-4108941c17ca\") " pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:30 crc kubenswrapper[5034]: I0105 23:20:30.113515 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:30 crc kubenswrapper[5034]: I0105 23:20:30.127413 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:30 crc kubenswrapper[5034]: I0105 23:20:30.600307 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-71ef-account-create-update-7wzzf"] Jan 05 23:20:30 crc kubenswrapper[5034]: I0105 23:20:30.633295 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7n6zf"] Jan 05 23:20:31 crc kubenswrapper[5034]: I0105 23:20:31.631273 5034 generic.go:334] "Generic (PLEG): container finished" podID="72f5d690-a7b2-4057-bc84-4108941c17ca" containerID="a017c718f0983b159a48d8593cf86e5d7ecdda530f84f9fe2ecab609b4641785" exitCode=0 Jan 05 23:20:31 crc kubenswrapper[5034]: I0105 23:20:31.631360 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-71ef-account-create-update-7wzzf" event={"ID":"72f5d690-a7b2-4057-bc84-4108941c17ca","Type":"ContainerDied","Data":"a017c718f0983b159a48d8593cf86e5d7ecdda530f84f9fe2ecab609b4641785"} Jan 05 23:20:31 crc kubenswrapper[5034]: I0105 23:20:31.631736 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-71ef-account-create-update-7wzzf" event={"ID":"72f5d690-a7b2-4057-bc84-4108941c17ca","Type":"ContainerStarted","Data":"2f5f733d7e4014b54eb1b11316ce7cd75d0683ba0952c7d3de99000813a98774"} Jan 05 23:20:31 crc kubenswrapper[5034]: I0105 23:20:31.633813 5034 generic.go:334] "Generic (PLEG): container finished" podID="ae531dad-14d8-4872-8542-8b1c6fd9e388" containerID="43bda9d75dbee027a1cee3f2555f210514c0173269b3ae446477919b0aecae48" exitCode=0 Jan 05 23:20:31 crc kubenswrapper[5034]: I0105 23:20:31.633852 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7n6zf" event={"ID":"ae531dad-14d8-4872-8542-8b1c6fd9e388","Type":"ContainerDied","Data":"43bda9d75dbee027a1cee3f2555f210514c0173269b3ae446477919b0aecae48"} Jan 05 23:20:31 crc kubenswrapper[5034]: I0105 23:20:31.633876 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7n6zf" event={"ID":"ae531dad-14d8-4872-8542-8b1c6fd9e388","Type":"ContainerStarted","Data":"790441cc7a539ebb2c51223963637c9b2a2a394d16e79b5bb9ab0a101384aebb"} Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.137613 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.145053 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.228684 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72f5d690-a7b2-4057-bc84-4108941c17ca-operator-scripts\") pod \"72f5d690-a7b2-4057-bc84-4108941c17ca\" (UID: \"72f5d690-a7b2-4057-bc84-4108941c17ca\") " Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.229383 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae531dad-14d8-4872-8542-8b1c6fd9e388-operator-scripts\") pod \"ae531dad-14d8-4872-8542-8b1c6fd9e388\" (UID: \"ae531dad-14d8-4872-8542-8b1c6fd9e388\") " Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.229451 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xngt2\" (UniqueName: \"kubernetes.io/projected/72f5d690-a7b2-4057-bc84-4108941c17ca-kube-api-access-xngt2\") pod \"72f5d690-a7b2-4057-bc84-4108941c17ca\" (UID: \"72f5d690-a7b2-4057-bc84-4108941c17ca\") " Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.229645 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72f5d690-a7b2-4057-bc84-4108941c17ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72f5d690-a7b2-4057-bc84-4108941c17ca" (UID: "72f5d690-a7b2-4057-bc84-4108941c17ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.229671 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp96h\" (UniqueName: \"kubernetes.io/projected/ae531dad-14d8-4872-8542-8b1c6fd9e388-kube-api-access-wp96h\") pod \"ae531dad-14d8-4872-8542-8b1c6fd9e388\" (UID: \"ae531dad-14d8-4872-8542-8b1c6fd9e388\") " Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.229932 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae531dad-14d8-4872-8542-8b1c6fd9e388-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae531dad-14d8-4872-8542-8b1c6fd9e388" (UID: "ae531dad-14d8-4872-8542-8b1c6fd9e388"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.230473 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72f5d690-a7b2-4057-bc84-4108941c17ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.230498 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae531dad-14d8-4872-8542-8b1c6fd9e388-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.268695 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae531dad-14d8-4872-8542-8b1c6fd9e388-kube-api-access-wp96h" (OuterVolumeSpecName: "kube-api-access-wp96h") pod "ae531dad-14d8-4872-8542-8b1c6fd9e388" (UID: "ae531dad-14d8-4872-8542-8b1c6fd9e388"). InnerVolumeSpecName "kube-api-access-wp96h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.268742 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f5d690-a7b2-4057-bc84-4108941c17ca-kube-api-access-xngt2" (OuterVolumeSpecName: "kube-api-access-xngt2") pod "72f5d690-a7b2-4057-bc84-4108941c17ca" (UID: "72f5d690-a7b2-4057-bc84-4108941c17ca"). InnerVolumeSpecName "kube-api-access-xngt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.332591 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xngt2\" (UniqueName: \"kubernetes.io/projected/72f5d690-a7b2-4057-bc84-4108941c17ca-kube-api-access-xngt2\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.332652 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp96h\" (UniqueName: \"kubernetes.io/projected/ae531dad-14d8-4872-8542-8b1c6fd9e388-kube-api-access-wp96h\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.660007 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7n6zf" event={"ID":"ae531dad-14d8-4872-8542-8b1c6fd9e388","Type":"ContainerDied","Data":"790441cc7a539ebb2c51223963637c9b2a2a394d16e79b5bb9ab0a101384aebb"} Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.660142 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="790441cc7a539ebb2c51223963637c9b2a2a394d16e79b5bb9ab0a101384aebb" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.660152 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7n6zf" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.662313 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-71ef-account-create-update-7wzzf" event={"ID":"72f5d690-a7b2-4057-bc84-4108941c17ca","Type":"ContainerDied","Data":"2f5f733d7e4014b54eb1b11316ce7cd75d0683ba0952c7d3de99000813a98774"} Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.662377 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5f733d7e4014b54eb1b11316ce7cd75d0683ba0952c7d3de99000813a98774" Jan 05 23:20:33 crc kubenswrapper[5034]: I0105 23:20:33.662412 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-71ef-account-create-update-7wzzf" Jan 05 23:20:34 crc kubenswrapper[5034]: I0105 23:20:34.217400 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.381852 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8qsjc"] Jan 05 23:20:35 crc kubenswrapper[5034]: E0105 23:20:35.382586 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f5d690-a7b2-4057-bc84-4108941c17ca" containerName="mariadb-account-create-update" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.382603 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f5d690-a7b2-4057-bc84-4108941c17ca" containerName="mariadb-account-create-update" Jan 05 23:20:35 crc kubenswrapper[5034]: E0105 23:20:35.382631 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae531dad-14d8-4872-8542-8b1c6fd9e388" containerName="mariadb-database-create" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.382637 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae531dad-14d8-4872-8542-8b1c6fd9e388" containerName="mariadb-database-create" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.382781 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae531dad-14d8-4872-8542-8b1c6fd9e388" containerName="mariadb-database-create" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.382800 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f5d690-a7b2-4057-bc84-4108941c17ca" containerName="mariadb-account-create-update" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.383441 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.389630 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.389887 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.389967 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.392195 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gj85k" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.399193 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8qsjc"] Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.474805 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-config-data\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.475483 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7n8m\" (UniqueName: \"kubernetes.io/projected/219ee213-be92-46e6-ab65-d0dbc0d6ac85-kube-api-access-p7n8m\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.475583 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-combined-ca-bundle\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.577047 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7n8m\" (UniqueName: \"kubernetes.io/projected/219ee213-be92-46e6-ab65-d0dbc0d6ac85-kube-api-access-p7n8m\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.577751 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-combined-ca-bundle\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.578754 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-config-data\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.583093 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-combined-ca-bundle\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " 
pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.586461 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-config-data\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.593891 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7n8m\" (UniqueName: \"kubernetes.io/projected/219ee213-be92-46e6-ab65-d0dbc0d6ac85-kube-api-access-p7n8m\") pod \"keystone-db-sync-8qsjc\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:35 crc kubenswrapper[5034]: I0105 23:20:35.755749 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:36 crc kubenswrapper[5034]: I0105 23:20:36.216502 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8qsjc"] Jan 05 23:20:36 crc kubenswrapper[5034]: I0105 23:20:36.684758 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8qsjc" event={"ID":"219ee213-be92-46e6-ab65-d0dbc0d6ac85","Type":"ContainerStarted","Data":"50f1fcc01b16dbf9bbff1097ab95070080557e928844c4d854cedb2e00b93fc6"} Jan 05 23:20:36 crc kubenswrapper[5034]: I0105 23:20:36.690245 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8qsjc" event={"ID":"219ee213-be92-46e6-ab65-d0dbc0d6ac85","Type":"ContainerStarted","Data":"c71d7bc80f7d9c25f83cc26945b2f887d951089328ddb16ecc09ef03338feac7"} Jan 05 23:20:36 crc kubenswrapper[5034]: I0105 23:20:36.703831 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8qsjc" podStartSLOduration=1.7038009490000001 podStartE2EDuration="1.703800949s" podCreationTimestamp="2026-01-05 23:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:36.700819164 +0000 UTC m=+5329.072818613" watchObservedRunningTime="2026-01-05 23:20:36.703800949 +0000 UTC m=+5329.075800388" Jan 05 23:20:38 crc kubenswrapper[5034]: I0105 23:20:38.702463 5034 generic.go:334] "Generic (PLEG): container finished" podID="219ee213-be92-46e6-ab65-d0dbc0d6ac85" containerID="50f1fcc01b16dbf9bbff1097ab95070080557e928844c4d854cedb2e00b93fc6" exitCode=0 Jan 05 23:20:38 crc kubenswrapper[5034]: I0105 23:20:38.702603 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8qsjc" event={"ID":"219ee213-be92-46e6-ab65-d0dbc0d6ac85","Type":"ContainerDied","Data":"50f1fcc01b16dbf9bbff1097ab95070080557e928844c4d854cedb2e00b93fc6"} Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.053877 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.159820 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7n8m\" (UniqueName: \"kubernetes.io/projected/219ee213-be92-46e6-ab65-d0dbc0d6ac85-kube-api-access-p7n8m\") pod \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.159999 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-combined-ca-bundle\") pod \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.160066 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-config-data\") pod \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\" (UID: \"219ee213-be92-46e6-ab65-d0dbc0d6ac85\") " Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.165558 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219ee213-be92-46e6-ab65-d0dbc0d6ac85-kube-api-access-p7n8m" (OuterVolumeSpecName: "kube-api-access-p7n8m") pod "219ee213-be92-46e6-ab65-d0dbc0d6ac85" (UID: "219ee213-be92-46e6-ab65-d0dbc0d6ac85"). InnerVolumeSpecName "kube-api-access-p7n8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.183994 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "219ee213-be92-46e6-ab65-d0dbc0d6ac85" (UID: "219ee213-be92-46e6-ab65-d0dbc0d6ac85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.201380 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-config-data" (OuterVolumeSpecName: "config-data") pod "219ee213-be92-46e6-ab65-d0dbc0d6ac85" (UID: "219ee213-be92-46e6-ab65-d0dbc0d6ac85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.261986 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.262022 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7n8m\" (UniqueName: \"kubernetes.io/projected/219ee213-be92-46e6-ab65-d0dbc0d6ac85-kube-api-access-p7n8m\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.262033 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219ee213-be92-46e6-ab65-d0dbc0d6ac85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.720999 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8qsjc" event={"ID":"219ee213-be92-46e6-ab65-d0dbc0d6ac85","Type":"ContainerDied","Data":"c71d7bc80f7d9c25f83cc26945b2f887d951089328ddb16ecc09ef03338feac7"} Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.721045 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8qsjc" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.721089 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71d7bc80f7d9c25f83cc26945b2f887d951089328ddb16ecc09ef03338feac7" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.976462 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f46ff4fd9-ksdjt"] Jan 05 23:20:40 crc kubenswrapper[5034]: E0105 23:20:40.978421 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219ee213-be92-46e6-ab65-d0dbc0d6ac85" containerName="keystone-db-sync" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.978466 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="219ee213-be92-46e6-ab65-d0dbc0d6ac85" containerName="keystone-db-sync" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.978720 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="219ee213-be92-46e6-ab65-d0dbc0d6ac85" containerName="keystone-db-sync" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.981817 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:40 crc kubenswrapper[5034]: I0105 23:20:40.997856 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f46ff4fd9-ksdjt"] Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.036101 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6m5cw"] Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.044279 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.047168 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.047373 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gj85k" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.047532 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.047698 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.047869 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.059640 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6m5cw"] Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.079316 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-config\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.079446 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-dns-svc\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.079505 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.079882 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.080120 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbchw\" (UniqueName: \"kubernetes.io/projected/a60218b9-937a-431f-8849-babc2ca5e2c3-kube-api-access-lbchw\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.182489 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbchw\" (UniqueName: \"kubernetes.io/projected/a60218b9-937a-431f-8849-babc2ca5e2c3-kube-api-access-lbchw\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.182651 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-config\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.182688 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-combined-ca-bundle\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.182732 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-fernet-keys\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.182849 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwdpl\" (UniqueName: \"kubernetes.io/projected/ca03b64a-e1d6-481f-9e90-cd2837d39acd-kube-api-access-rwdpl\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.183004 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-credential-keys\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.183039 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-dns-svc\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.183148 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-scripts\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.183230 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.183273 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-config-data\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.183411 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.183742 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-config\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.184105 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.184385 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.184710 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-dns-svc\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.203238 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbchw\" (UniqueName: \"kubernetes.io/projected/a60218b9-937a-431f-8849-babc2ca5e2c3-kube-api-access-lbchw\") pod \"dnsmasq-dns-5f46ff4fd9-ksdjt\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.285396 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-config-data\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.285531 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-combined-ca-bundle\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.285559 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-fernet-keys\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.285583 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwdpl\" (UniqueName: 
\"kubernetes.io/projected/ca03b64a-e1d6-481f-9e90-cd2837d39acd-kube-api-access-rwdpl\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.285632 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-credential-keys\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.285671 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-scripts\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.290404 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-scripts\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.290546 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-fernet-keys\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.290634 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-credential-keys\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.290710 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-combined-ca-bundle\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.291917 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-config-data\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.299922 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.303866 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdpl\" (UniqueName: \"kubernetes.io/projected/ca03b64a-e1d6-481f-9e90-cd2837d39acd-kube-api-access-rwdpl\") pod \"keystone-bootstrap-6m5cw\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.364234 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.868141 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f46ff4fd9-ksdjt"] Jan 05 23:20:41 crc kubenswrapper[5034]: I0105 23:20:41.906267 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6m5cw"] Jan 05 23:20:42 crc kubenswrapper[5034]: I0105 23:20:42.749749 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6m5cw" event={"ID":"ca03b64a-e1d6-481f-9e90-cd2837d39acd","Type":"ContainerStarted","Data":"105873b758b91d7e799060493683cdf41aac458ae64818dd4b2c31527a5af0a4"} Jan 05 23:20:42 crc kubenswrapper[5034]: I0105 23:20:42.750028 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6m5cw" event={"ID":"ca03b64a-e1d6-481f-9e90-cd2837d39acd","Type":"ContainerStarted","Data":"4bb065456db180ea8b8e72a7836be86de44916088acf9afeca3ec736e5b34ac9"} Jan 05 23:20:42 crc kubenswrapper[5034]: I0105 23:20:42.751426 5034 generic.go:334] "Generic (PLEG): container finished" podID="a60218b9-937a-431f-8849-babc2ca5e2c3" containerID="9a5a81c2bb56856daeae935d22f02489c648e4c026af84a4c34531a94ad96eea" exitCode=0 Jan 05 23:20:42 crc kubenswrapper[5034]: I0105 23:20:42.751457 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" event={"ID":"a60218b9-937a-431f-8849-babc2ca5e2c3","Type":"ContainerDied","Data":"9a5a81c2bb56856daeae935d22f02489c648e4c026af84a4c34531a94ad96eea"} Jan 05 23:20:42 crc kubenswrapper[5034]: I0105 23:20:42.751472 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" event={"ID":"a60218b9-937a-431f-8849-babc2ca5e2c3","Type":"ContainerStarted","Data":"71859ded9127eee07cfbcd311cc59d12a089afc98038ba5863d716cff9dc6a1e"} Jan 05 23:20:42 crc kubenswrapper[5034]: I0105 23:20:42.785431 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6m5cw" podStartSLOduration=2.78536652 podStartE2EDuration="2.78536652s" podCreationTimestamp="2026-01-05 23:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:42.782579451 +0000 UTC m=+5335.154578890" watchObservedRunningTime="2026-01-05 23:20:42.78536652 +0000 UTC m=+5335.157365979" Jan 05 23:20:43 crc kubenswrapper[5034]: I0105 23:20:43.764020 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" event={"ID":"a60218b9-937a-431f-8849-babc2ca5e2c3","Type":"ContainerStarted","Data":"1a0310e236a951b75eb049f6f13175d8ff0fc02d9fe78d90ebb144438922e2ff"} Jan 05 23:20:43 crc kubenswrapper[5034]: I0105 23:20:43.764389 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:43 crc kubenswrapper[5034]: I0105 23:20:43.790222 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" podStartSLOduration=3.790189806 podStartE2EDuration="3.790189806s" podCreationTimestamp="2026-01-05 23:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:43.78116287 +0000 UTC m=+5336.153162319" watchObservedRunningTime="2026-01-05 23:20:43.790189806 +0000 UTC m=+5336.162189245" Jan 05 23:20:45 crc 
kubenswrapper[5034]: I0105 23:20:45.780596 5034 generic.go:334] "Generic (PLEG): container finished" podID="ca03b64a-e1d6-481f-9e90-cd2837d39acd" containerID="105873b758b91d7e799060493683cdf41aac458ae64818dd4b2c31527a5af0a4" exitCode=0 Jan 05 23:20:45 crc kubenswrapper[5034]: I0105 23:20:45.780825 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6m5cw" event={"ID":"ca03b64a-e1d6-481f-9e90-cd2837d39acd","Type":"ContainerDied","Data":"105873b758b91d7e799060493683cdf41aac458ae64818dd4b2c31527a5af0a4"} Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.210892 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.299349 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-credential-keys\") pod \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.299472 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-scripts\") pod \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.299511 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-combined-ca-bundle\") pod \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.299611 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-config-data\") pod \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.299667 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-fernet-keys\") pod \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.299743 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwdpl\" (UniqueName: \"kubernetes.io/projected/ca03b64a-e1d6-481f-9e90-cd2837d39acd-kube-api-access-rwdpl\") pod \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\" (UID: \"ca03b64a-e1d6-481f-9e90-cd2837d39acd\") " Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.307259 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca03b64a-e1d6-481f-9e90-cd2837d39acd-kube-api-access-rwdpl" (OuterVolumeSpecName: "kube-api-access-rwdpl") pod "ca03b64a-e1d6-481f-9e90-cd2837d39acd" (UID: "ca03b64a-e1d6-481f-9e90-cd2837d39acd"). InnerVolumeSpecName "kube-api-access-rwdpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.307475 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-scripts" (OuterVolumeSpecName: "scripts") pod "ca03b64a-e1d6-481f-9e90-cd2837d39acd" (UID: "ca03b64a-e1d6-481f-9e90-cd2837d39acd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.308500 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ca03b64a-e1d6-481f-9e90-cd2837d39acd" (UID: "ca03b64a-e1d6-481f-9e90-cd2837d39acd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.309565 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ca03b64a-e1d6-481f-9e90-cd2837d39acd" (UID: "ca03b64a-e1d6-481f-9e90-cd2837d39acd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.327953 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca03b64a-e1d6-481f-9e90-cd2837d39acd" (UID: "ca03b64a-e1d6-481f-9e90-cd2837d39acd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.329697 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-config-data" (OuterVolumeSpecName: "config-data") pod "ca03b64a-e1d6-481f-9e90-cd2837d39acd" (UID: "ca03b64a-e1d6-481f-9e90-cd2837d39acd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.402691 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.402729 5034 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.402741 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwdpl\" (UniqueName: \"kubernetes.io/projected/ca03b64a-e1d6-481f-9e90-cd2837d39acd-kube-api-access-rwdpl\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.402753 5034 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.402763 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.402775 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca03b64a-e1d6-481f-9e90-cd2837d39acd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.801124 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6m5cw" event={"ID":"ca03b64a-e1d6-481f-9e90-cd2837d39acd","Type":"ContainerDied","Data":"4bb065456db180ea8b8e72a7836be86de44916088acf9afeca3ec736e5b34ac9"} Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.801181 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb065456db180ea8b8e72a7836be86de44916088acf9afeca3ec736e5b34ac9" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.801193 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6m5cw" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.889546 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6m5cw"] Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.900688 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6m5cw"] Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.983493 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hlvmj"] Jan 05 23:20:47 crc kubenswrapper[5034]: E0105 23:20:47.983972 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca03b64a-e1d6-481f-9e90-cd2837d39acd" containerName="keystone-bootstrap" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.983997 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca03b64a-e1d6-481f-9e90-cd2837d39acd" containerName="keystone-bootstrap" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.984297 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca03b64a-e1d6-481f-9e90-cd2837d39acd" containerName="keystone-bootstrap" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.986265 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.990516 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.990526 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 23:20:47 crc kubenswrapper[5034]: I0105 23:20:47.992279 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.000488 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.001141 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gj85k" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.003720 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hlvmj"] Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.119540 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-config-data\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.119614 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8dn\" (UniqueName: \"kubernetes.io/projected/e91a628a-9407-496c-96fa-25985569f851-kube-api-access-8q8dn\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.119713 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-scripts\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.119744 
5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-credential-keys\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.119762 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-fernet-keys\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.119816 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-combined-ca-bundle\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.222123 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-combined-ca-bundle\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.222253 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-config-data\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.222303 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8dn\" (UniqueName: \"kubernetes.io/projected/e91a628a-9407-496c-96fa-25985569f851-kube-api-access-8q8dn\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.222393 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-scripts\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.222446 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-credential-keys\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.222498 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-fernet-keys\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.228519 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-credential-keys\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.229552 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-fernet-keys\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.231618 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-scripts\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.231637 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-config-data\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.232352 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-combined-ca-bundle\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.246862 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8dn\" (UniqueName: \"kubernetes.io/projected/e91a628a-9407-496c-96fa-25985569f851-kube-api-access-8q8dn\") pod \"keystone-bootstrap-hlvmj\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.310382 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.780061 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hlvmj"] Jan 05 23:20:48 crc kubenswrapper[5034]: W0105 23:20:48.786229 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode91a628a_9407_496c_96fa_25985569f851.slice/crio-5178c7029a34a9f8a4812ae7f82df20f04db26bfb94f71fbce39291e875293b1 WatchSource:0}: Error finding container 5178c7029a34a9f8a4812ae7f82df20f04db26bfb94f71fbce39291e875293b1: Status 404 returned error can't find the container with id 5178c7029a34a9f8a4812ae7f82df20f04db26bfb94f71fbce39291e875293b1 Jan 05 23:20:48 crc kubenswrapper[5034]: I0105 23:20:48.810483 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlvmj" event={"ID":"e91a628a-9407-496c-96fa-25985569f851","Type":"ContainerStarted","Data":"5178c7029a34a9f8a4812ae7f82df20f04db26bfb94f71fbce39291e875293b1"} Jan 05 23:20:49 crc kubenswrapper[5034]: I0105 23:20:49.821501 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlvmj" event={"ID":"e91a628a-9407-496c-96fa-25985569f851","Type":"ContainerStarted","Data":"8fe8f2d0718d3a1afe9ea0b3d87048d7293976d30bcc51f1617d98261df539cb"} Jan 05 23:20:49 crc kubenswrapper[5034]: I0105 23:20:49.847415 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hlvmj" podStartSLOduration=2.847363135 podStartE2EDuration="2.847363135s" podCreationTimestamp="2026-01-05 23:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:49.836555669 +0000 UTC m=+5342.208555108" watchObservedRunningTime="2026-01-05 23:20:49.847363135 +0000 UTC m=+5342.219362594" Jan 05 23:20:49 crc kubenswrapper[5034]: I0105 23:20:49.848450 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca03b64a-e1d6-481f-9e90-cd2837d39acd" path="/var/lib/kubelet/pods/ca03b64a-e1d6-481f-9e90-cd2837d39acd/volumes" Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.302421 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.373472 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d6d46bc59-vwttr"] Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.373795 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" podUID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerName="dnsmasq-dns" containerID="cri-o://6b99077d759d95bfe2ea9a5bca8c399631cdd895d2593f5157ab81360499c4b1" gracePeriod=10 Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.413986 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" podUID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.14:5353: connect: connection refused" Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.846532 5034 generic.go:334] "Generic (PLEG): container finished" podID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerID="6b99077d759d95bfe2ea9a5bca8c399631cdd895d2593f5157ab81360499c4b1" exitCode=0 Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.851296 
5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" event={"ID":"f96f655c-7c9b-4f0e-a83d-153fbed2d771","Type":"ContainerDied","Data":"6b99077d759d95bfe2ea9a5bca8c399631cdd895d2593f5157ab81360499c4b1"} Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.851339 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" event={"ID":"f96f655c-7c9b-4f0e-a83d-153fbed2d771","Type":"ContainerDied","Data":"74b95637d70f352219ca854f271d64365f7343482325df5e60264e834d6d86cb"} Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.851351 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74b95637d70f352219ca854f271d64365f7343482325df5e60264e834d6d86cb" Jan 05 23:20:51 crc kubenswrapper[5034]: I0105 23:20:51.865838 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.003827 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-sb\") pod \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.003924 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-dns-svc\") pod \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.003981 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-nb\") pod \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.004038 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwvwx\" (UniqueName: \"kubernetes.io/projected/f96f655c-7c9b-4f0e-a83d-153fbed2d771-kube-api-access-bwvwx\") pod \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.004058 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-config\") pod \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\" (UID: \"f96f655c-7c9b-4f0e-a83d-153fbed2d771\") " Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.010331 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f96f655c-7c9b-4f0e-a83d-153fbed2d771-kube-api-access-bwvwx" (OuterVolumeSpecName: "kube-api-access-bwvwx") pod "f96f655c-7c9b-4f0e-a83d-153fbed2d771" (UID: "f96f655c-7c9b-4f0e-a83d-153fbed2d771"). InnerVolumeSpecName "kube-api-access-bwvwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.043225 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f96f655c-7c9b-4f0e-a83d-153fbed2d771" (UID: "f96f655c-7c9b-4f0e-a83d-153fbed2d771"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.043730 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-config" (OuterVolumeSpecName: "config") pod "f96f655c-7c9b-4f0e-a83d-153fbed2d771" (UID: "f96f655c-7c9b-4f0e-a83d-153fbed2d771"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.047116 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f96f655c-7c9b-4f0e-a83d-153fbed2d771" (UID: "f96f655c-7c9b-4f0e-a83d-153fbed2d771"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.048542 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f96f655c-7c9b-4f0e-a83d-153fbed2d771" (UID: "f96f655c-7c9b-4f0e-a83d-153fbed2d771"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.106290 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.106318 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.106327 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.106338 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwvwx\" (UniqueName: \"kubernetes.io/projected/f96f655c-7c9b-4f0e-a83d-153fbed2d771-kube-api-access-bwvwx\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.106349 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f96f655c-7c9b-4f0e-a83d-153fbed2d771-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.856592 5034 generic.go:334] "Generic (PLEG): container finished" podID="e91a628a-9407-496c-96fa-25985569f851" containerID="8fe8f2d0718d3a1afe9ea0b3d87048d7293976d30bcc51f1617d98261df539cb" exitCode=0 Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.856688 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d6d46bc59-vwttr" Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.856704 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlvmj" event={"ID":"e91a628a-9407-496c-96fa-25985569f851","Type":"ContainerDied","Data":"8fe8f2d0718d3a1afe9ea0b3d87048d7293976d30bcc51f1617d98261df539cb"} Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.915670 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d6d46bc59-vwttr"] Jan 05 23:20:52 crc kubenswrapper[5034]: I0105 23:20:52.922843 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d6d46bc59-vwttr"] Jan 05 23:20:53 crc kubenswrapper[5034]: I0105 23:20:53.850177 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" path="/var/lib/kubelet/pods/f96f655c-7c9b-4f0e-a83d-153fbed2d771/volumes" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.249929 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.349002 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-config-data\") pod \"e91a628a-9407-496c-96fa-25985569f851\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.349102 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q8dn\" (UniqueName: \"kubernetes.io/projected/e91a628a-9407-496c-96fa-25985569f851-kube-api-access-8q8dn\") pod \"e91a628a-9407-496c-96fa-25985569f851\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.349155 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-credential-keys\") pod \"e91a628a-9407-496c-96fa-25985569f851\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.349265 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-scripts\") pod \"e91a628a-9407-496c-96fa-25985569f851\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.349299 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-fernet-keys\") pod \"e91a628a-9407-496c-96fa-25985569f851\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.349334 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-combined-ca-bundle\") pod \"e91a628a-9407-496c-96fa-25985569f851\" (UID: \"e91a628a-9407-496c-96fa-25985569f851\") " Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.358435 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91a628a-9407-496c-96fa-25985569f851-kube-api-access-8q8dn" (OuterVolumeSpecName: "kube-api-access-8q8dn") pod 
"e91a628a-9407-496c-96fa-25985569f851" (UID: "e91a628a-9407-496c-96fa-25985569f851"). InnerVolumeSpecName "kube-api-access-8q8dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.358792 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e91a628a-9407-496c-96fa-25985569f851" (UID: "e91a628a-9407-496c-96fa-25985569f851"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.360344 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-scripts" (OuterVolumeSpecName: "scripts") pod "e91a628a-9407-496c-96fa-25985569f851" (UID: "e91a628a-9407-496c-96fa-25985569f851"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.365043 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e91a628a-9407-496c-96fa-25985569f851" (UID: "e91a628a-9407-496c-96fa-25985569f851"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.380173 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-config-data" (OuterVolumeSpecName: "config-data") pod "e91a628a-9407-496c-96fa-25985569f851" (UID: "e91a628a-9407-496c-96fa-25985569f851"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.381059 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e91a628a-9407-496c-96fa-25985569f851" (UID: "e91a628a-9407-496c-96fa-25985569f851"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.451661 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.451701 5034 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.451712 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.451722 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.451730 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q8dn\" (UniqueName: \"kubernetes.io/projected/e91a628a-9407-496c-96fa-25985569f851-kube-api-access-8q8dn\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.451739 5034 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e91a628a-9407-496c-96fa-25985569f851-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.879921 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlvmj" event={"ID":"e91a628a-9407-496c-96fa-25985569f851","Type":"ContainerDied","Data":"5178c7029a34a9f8a4812ae7f82df20f04db26bfb94f71fbce39291e875293b1"} Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.879989 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5178c7029a34a9f8a4812ae7f82df20f04db26bfb94f71fbce39291e875293b1" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.880056 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlvmj" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.961965 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-64fbcf98c8-v62hn"] Jan 05 23:20:54 crc kubenswrapper[5034]: E0105 23:20:54.962744 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91a628a-9407-496c-96fa-25985569f851" containerName="keystone-bootstrap" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.962772 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91a628a-9407-496c-96fa-25985569f851" containerName="keystone-bootstrap" Jan 05 23:20:54 crc kubenswrapper[5034]: E0105 23:20:54.962811 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerName="dnsmasq-dns" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.962823 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerName="dnsmasq-dns" Jan 05 23:20:54 crc kubenswrapper[5034]: E0105 23:20:54.962845 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerName="init" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.962854 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerName="init" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.964692 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f96f655c-7c9b-4f0e-a83d-153fbed2d771" containerName="dnsmasq-dns" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.964724 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91a628a-9407-496c-96fa-25985569f851" containerName="keystone-bootstrap" Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.965645 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.968700 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.969260 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gj85k"
Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.969568 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.969728 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.969852 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.970035 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 05 23:20:54 crc kubenswrapper[5034]: I0105 23:20:54.982543 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64fbcf98c8-v62hn"]
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.062899 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-scripts\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.062976 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-config-data\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.063008 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-fernet-keys\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.063042 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-credential-keys\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.063093 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-internal-tls-certs\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.063116 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzv8n\" (UniqueName: \"kubernetes.io/projected/34d5e0e7-e3e5-4315-9706-70605d6fdff5-kube-api-access-bzv8n\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.063145 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-combined-ca-bundle\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.063160 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-public-tls-certs\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.165448 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-internal-tls-certs\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.165552 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzv8n\" (UniqueName: \"kubernetes.io/projected/34d5e0e7-e3e5-4315-9706-70605d6fdff5-kube-api-access-bzv8n\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.166155 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-combined-ca-bundle\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.166648 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-public-tls-certs\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.166759 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-scripts\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.166818 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-config-data\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.166879 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-fernet-keys\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.166939 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-credential-keys\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.171370 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-internal-tls-certs\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.171459 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-scripts\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.171487 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-combined-ca-bundle\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.172458 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-public-tls-certs\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.178829 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-credential-keys\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.178863 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-fernet-keys\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.178954 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d5e0e7-e3e5-4315-9706-70605d6fdff5-config-data\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.183395 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzv8n\" (UniqueName: \"kubernetes.io/projected/34d5e0e7-e3e5-4315-9706-70605d6fdff5-kube-api-access-bzv8n\") pod \"keystone-64fbcf98c8-v62hn\" (UID: \"34d5e0e7-e3e5-4315-9706-70605d6fdff5\") " pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.335737 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.774171 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64fbcf98c8-v62hn"]
Jan 05 23:20:55 crc kubenswrapper[5034]: I0105 23:20:55.890991 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64fbcf98c8-v62hn" event={"ID":"34d5e0e7-e3e5-4315-9706-70605d6fdff5","Type":"ContainerStarted","Data":"8214422d51b79220b176671a9a262c1c47555e643b0cc1303f9c7895ef733e58"}
Jan 05 23:20:56 crc kubenswrapper[5034]: I0105 23:20:56.905034 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64fbcf98c8-v62hn" event={"ID":"34d5e0e7-e3e5-4315-9706-70605d6fdff5","Type":"ContainerStarted","Data":"7c06edaefa8d5edd8275987b41093af6074df43b3ab7924b3e54652a2e82d8ec"}
Jan 05 23:20:56 crc kubenswrapper[5034]: I0105 23:20:56.905415 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:20:56 crc kubenswrapper[5034]: I0105 23:20:56.925540 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-64fbcf98c8-v62hn" podStartSLOduration=2.92552071 podStartE2EDuration="2.92552071s" podCreationTimestamp="2026-01-05 23:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:20:56.924219943 +0000 UTC m=+5349.296219462" watchObservedRunningTime="2026-01-05 23:20:56.92552071 +0000 UTC m=+5349.297520159"
Jan 05 23:20:58 crc kubenswrapper[5034]: I0105 23:20:58.714448 5034 scope.go:117] "RemoveContainer" containerID="bce11df8a6e6a9bf23d13a961707218f5ecaa5c2880b96f8ad80b1aa14b1502a"
Jan 05 23:21:26 crc kubenswrapper[5034]: I0105 23:21:26.950407 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-64fbcf98c8-v62hn"
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.750171 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.752155 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.755667 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.756142 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mv695"
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.756324 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.777410 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.946502 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.946568 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.946619 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhws\" (UniqueName: \"kubernetes.io/projected/6c9592dc-4604-488b-912a-b3ab5e11fd3b-kube-api-access-swhws\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:29 crc kubenswrapper[5034]: I0105 23:21:29.947330 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.049366 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.049527 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.049558 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.049605 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swhws\" (UniqueName: \"kubernetes.io/projected/6c9592dc-4604-488b-912a-b3ab5e11fd3b-kube-api-access-swhws\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.050477 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.058039 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.061666 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.067892 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swhws\" (UniqueName: \"kubernetes.io/projected/6c9592dc-4604-488b-912a-b3ab5e11fd3b-kube-api-access-swhws\") pod \"openstackclient\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") " pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.072838 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 05 23:21:30 crc kubenswrapper[5034]: I0105 23:21:30.373026 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 05 23:21:31 crc kubenswrapper[5034]: I0105 23:21:31.257605 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6c9592dc-4604-488b-912a-b3ab5e11fd3b","Type":"ContainerStarted","Data":"45f0c24ea020f76522361b85eb4fb2f887f4666d1ae2898da6078be02686deb6"}
Jan 05 23:21:31 crc kubenswrapper[5034]: I0105 23:21:31.257904 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6c9592dc-4604-488b-912a-b3ab5e11fd3b","Type":"ContainerStarted","Data":"5e1b48742f2dfcda8d92f9fe38e35badd543e75971dde6e30a5a18f689825817"}
Jan 05 23:21:31 crc kubenswrapper[5034]: I0105 23:21:31.281650 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.281623059 podStartE2EDuration="2.281623059s" podCreationTimestamp="2026-01-05 23:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:21:31.277581465 +0000 UTC m=+5383.649580944" watchObservedRunningTime="2026-01-05 23:21:31.281623059 +0000 UTC m=+5383.653622508"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.190958 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4spsc"]
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.194795 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.208054 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4spsc"]
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.271002 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-utilities\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.271067 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cjgv\" (UniqueName: \"kubernetes.io/projected/37a94026-a496-471b-9e42-a49b515d7d3e-kube-api-access-6cjgv\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.271255 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-catalog-content\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.375634 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-utilities\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.375781 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cjgv\" (UniqueName: \"kubernetes.io/projected/37a94026-a496-471b-9e42-a49b515d7d3e-kube-api-access-6cjgv\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.375940 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-catalog-content\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.376560 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-catalog-content\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.376762 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-utilities\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.413522 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cjgv\" (UniqueName: \"kubernetes.io/projected/37a94026-a496-471b-9e42-a49b515d7d3e-kube-api-access-6cjgv\") pod \"certified-operators-4spsc\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") " pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:41 crc kubenswrapper[5034]: I0105 23:21:41.526858 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:42 crc kubenswrapper[5034]: I0105 23:21:42.035251 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4spsc"]
Jan 05 23:21:42 crc kubenswrapper[5034]: I0105 23:21:42.361005 5034 generic.go:334] "Generic (PLEG): container finished" podID="37a94026-a496-471b-9e42-a49b515d7d3e" containerID="a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca" exitCode=0
Jan 05 23:21:42 crc kubenswrapper[5034]: I0105 23:21:42.361189 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spsc" event={"ID":"37a94026-a496-471b-9e42-a49b515d7d3e","Type":"ContainerDied","Data":"a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca"}
Jan 05 23:21:42 crc kubenswrapper[5034]: I0105 23:21:42.361386 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spsc" event={"ID":"37a94026-a496-471b-9e42-a49b515d7d3e","Type":"ContainerStarted","Data":"0a9415b438c0f68cd1cbee7d4b41952afd59b72e1bbd4c16b67c7dcbeb5ead84"}
Jan 05 23:21:42 crc kubenswrapper[5034]: I0105 23:21:42.364312 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 05 23:21:43 crc kubenswrapper[5034]: I0105 23:21:43.378981 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spsc" event={"ID":"37a94026-a496-471b-9e42-a49b515d7d3e","Type":"ContainerStarted","Data":"4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104"}
Jan 05 23:21:44 crc kubenswrapper[5034]: I0105 23:21:44.391006 5034 generic.go:334] "Generic (PLEG): container finished" podID="37a94026-a496-471b-9e42-a49b515d7d3e" containerID="4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104" exitCode=0
Jan 05 23:21:44 crc kubenswrapper[5034]: I0105 23:21:44.391104 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spsc" event={"ID":"37a94026-a496-471b-9e42-a49b515d7d3e","Type":"ContainerDied","Data":"4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104"}
Jan 05 23:21:45 crc kubenswrapper[5034]: I0105 23:21:45.404165 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spsc" event={"ID":"37a94026-a496-471b-9e42-a49b515d7d3e","Type":"ContainerStarted","Data":"414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b"}
Jan 05 23:21:45 crc kubenswrapper[5034]: I0105 23:21:45.436050 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4spsc" podStartSLOduration=1.967488768 podStartE2EDuration="4.436017723s" podCreationTimestamp="2026-01-05 23:21:41 +0000 UTC" firstStartedPulling="2026-01-05 23:21:42.363844777 +0000 UTC m=+5394.735844256" lastFinishedPulling="2026-01-05 23:21:44.832373772 +0000 UTC m=+5397.204373211" observedRunningTime="2026-01-05 23:21:45.429788696 +0000 UTC m=+5397.801788165" watchObservedRunningTime="2026-01-05 23:21:45.436017723 +0000 UTC m=+5397.808017192"
Jan 05 23:21:50 crc kubenswrapper[5034]: I0105 23:21:50.469562 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 23:21:50 crc kubenswrapper[5034]: I0105 23:21:50.470580 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 23:21:51 crc kubenswrapper[5034]: I0105 23:21:51.527689 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:51 crc kubenswrapper[5034]: I0105 23:21:51.527899 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:51 crc kubenswrapper[5034]: I0105 23:21:51.600483 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:52 crc kubenswrapper[5034]: I0105 23:21:52.525292 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:52 crc kubenswrapper[5034]: I0105 23:21:52.582803 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4spsc"]
Jan 05 23:21:54 crc kubenswrapper[5034]: I0105 23:21:54.489161 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4spsc" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" containerName="registry-server" containerID="cri-o://414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b" gracePeriod=2
Jan 05 23:21:54 crc kubenswrapper[5034]: I0105 23:21:54.959996 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:54 crc kubenswrapper[5034]: I0105 23:21:54.968448 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cjgv\" (UniqueName: \"kubernetes.io/projected/37a94026-a496-471b-9e42-a49b515d7d3e-kube-api-access-6cjgv\") pod \"37a94026-a496-471b-9e42-a49b515d7d3e\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") "
Jan 05 23:21:54 crc kubenswrapper[5034]: I0105 23:21:54.968519 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-catalog-content\") pod \"37a94026-a496-471b-9e42-a49b515d7d3e\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") "
Jan 05 23:21:54 crc kubenswrapper[5034]: I0105 23:21:54.968547 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-utilities\") pod \"37a94026-a496-471b-9e42-a49b515d7d3e\" (UID: \"37a94026-a496-471b-9e42-a49b515d7d3e\") "
Jan 05 23:21:54 crc kubenswrapper[5034]: I0105 23:21:54.969699 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-utilities" (OuterVolumeSpecName: "utilities") pod "37a94026-a496-471b-9e42-a49b515d7d3e" (UID: "37a94026-a496-471b-9e42-a49b515d7d3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:21:54 crc kubenswrapper[5034]: I0105 23:21:54.977092 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a94026-a496-471b-9e42-a49b515d7d3e-kube-api-access-6cjgv" (OuterVolumeSpecName: "kube-api-access-6cjgv") pod "37a94026-a496-471b-9e42-a49b515d7d3e" (UID: "37a94026-a496-471b-9e42-a49b515d7d3e"). InnerVolumeSpecName "kube-api-access-6cjgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.017357 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37a94026-a496-471b-9e42-a49b515d7d3e" (UID: "37a94026-a496-471b-9e42-a49b515d7d3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.070603 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.070652 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a94026-a496-471b-9e42-a49b515d7d3e-utilities\") on node \"crc\" DevicePath \"\""
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.070664 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cjgv\" (UniqueName: \"kubernetes.io/projected/37a94026-a496-471b-9e42-a49b515d7d3e-kube-api-access-6cjgv\") on node \"crc\" DevicePath \"\""
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.501761 5034 generic.go:334] "Generic (PLEG): container finished" podID="37a94026-a496-471b-9e42-a49b515d7d3e" containerID="414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b" exitCode=0
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.501848 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spsc" event={"ID":"37a94026-a496-471b-9e42-a49b515d7d3e","Type":"ContainerDied","Data":"414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b"}
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.501902 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4spsc"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.502882 5034 scope.go:117] "RemoveContainer" containerID="414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.502857 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spsc" event={"ID":"37a94026-a496-471b-9e42-a49b515d7d3e","Type":"ContainerDied","Data":"0a9415b438c0f68cd1cbee7d4b41952afd59b72e1bbd4c16b67c7dcbeb5ead84"}
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.530635 5034 scope.go:117] "RemoveContainer" containerID="4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.547038 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4spsc"]
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.556046 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4spsc"]
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.597323 5034 scope.go:117] "RemoveContainer" containerID="a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.625866 5034 scope.go:117] "RemoveContainer" containerID="414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b"
Jan 05 23:21:55 crc kubenswrapper[5034]: E0105 23:21:55.626588 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b\": container with ID starting with 414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b not found: ID does not exist" containerID="414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.626646 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b"} err="failed to get container status \"414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b\": rpc error: code = NotFound desc = could not find container \"414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b\": container with ID starting with 414d34299c21941f9c17f82aef012a72f826693ec07d8fae808ce282aa0caa0b not found: ID does not exist"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.626682 5034 scope.go:117] "RemoveContainer" containerID="4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104"
Jan 05 23:21:55 crc kubenswrapper[5034]: E0105 23:21:55.627730 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104\": container with ID starting with 4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104 not found: ID does not exist" containerID="4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.627820 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104"} err="failed to get container status \"4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104\": rpc error: code = NotFound desc = could not find container \"4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104\": container with ID starting with 4c9494b1644670a5d0919f9cd49df7ee4d4e89b7f5e459b4e420bb6a6d1ec104 not found: ID does not exist"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.628107 5034 scope.go:117] "RemoveContainer" containerID="a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca"
Jan 05 23:21:55 crc kubenswrapper[5034]: E0105 23:21:55.628780 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca\": container with ID starting with a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca not found: ID does not exist" containerID="a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.628815 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca"} err="failed to get container status \"a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca\": rpc error: code = NotFound desc = could not find container \"a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca\": container with ID starting with a339e16064dac200c69a7854061aefbfb32a2206826406ee10e49e53790ca5ca not found: ID does not exist"
Jan 05 23:21:55 crc kubenswrapper[5034]: I0105 23:21:55.855022 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" path="/var/lib/kubelet/pods/37a94026-a496-471b-9e42-a49b515d7d3e/volumes"
Jan 05 23:21:58 crc kubenswrapper[5034]: I0105 23:21:58.795627 5034 scope.go:117] "RemoveContainer" containerID="b2469943207d24949e1f390b8062a8300a3e114b02c1bf1865affd11ffdb6293"
Jan 05 23:21:58 crc kubenswrapper[5034]: I0105 23:21:58.816557 5034 scope.go:117] "RemoveContainer" containerID="14e897b4f031d677606f0a43622da174f70d25a5a9bbc5668cb46c6eab385a96"
Jan 05 23:22:20 crc kubenswrapper[5034]: I0105 23:22:20.469643 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 23:22:20 crc kubenswrapper[5034]: I0105 23:22:20.471575 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 23:22:50 crc kubenswrapper[5034]: I0105 23:22:50.468785 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 23:22:50 crc kubenswrapper[5034]: I0105 23:22:50.469342 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 23:22:50 crc kubenswrapper[5034]: I0105 23:22:50.469392 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc"
Jan 05 23:22:50 crc kubenswrapper[5034]: I0105 23:22:50.470139 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f74e12195f4a4167262b994444528ef434816fe2ad847b01f18b8380f45c84a2"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 05 23:22:50 crc kubenswrapper[5034]: I0105 23:22:50.470192 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://f74e12195f4a4167262b994444528ef434816fe2ad847b01f18b8380f45c84a2" gracePeriod=600
Jan 05 23:22:51 crc kubenswrapper[5034]: I0105 23:22:51.020993 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="f74e12195f4a4167262b994444528ef434816fe2ad847b01f18b8380f45c84a2" exitCode=0
Jan 05 23:22:51 crc kubenswrapper[5034]: I0105 23:22:51.021105 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"f74e12195f4a4167262b994444528ef434816fe2ad847b01f18b8380f45c84a2"}
Jan 05 23:22:51 crc kubenswrapper[5034]: I0105 23:22:51.021559 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"}
Jan 05 23:22:51 crc kubenswrapper[5034]: I0105 23:22:51.021586 5034 scope.go:117] "RemoveContainer" containerID="8db2f6775ac127504b2a77b14d3fe24b1750cd45d14435dacadd0917b3ba642f"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.544440 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8t7cn"]
Jan 05 23:22:55 crc kubenswrapper[5034]: E0105 23:22:55.545179 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" containerName="registry-server"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.545198 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" containerName="registry-server"
Jan 05 23:22:55 crc kubenswrapper[5034]: E0105 23:22:55.545214 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" containerName="extract-content"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.545221 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" containerName="extract-content"
Jan 05 23:22:55 crc kubenswrapper[5034]: E0105 23:22:55.545236 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" containerName="extract-utilities"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.545243 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" containerName="extract-utilities"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.545424 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a94026-a496-471b-9e42-a49b515d7d3e" containerName="registry-server"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.546762 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.555900 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t7cn"]
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.719792 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-catalog-content\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.719841 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l5cn\" (UniqueName: \"kubernetes.io/projected/55053be5-68cd-4eac-8d7e-3811e9047fc7-kube-api-access-4l5cn\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.719922 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-utilities\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.822167 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l5cn\" (UniqueName: \"kubernetes.io/projected/55053be5-68cd-4eac-8d7e-3811e9047fc7-kube-api-access-4l5cn\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.822217 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-catalog-content\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.822277 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-utilities\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.822792 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-utilities\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.823013 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-catalog-content\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.853255 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l5cn\" (UniqueName: \"kubernetes.io/projected/55053be5-68cd-4eac-8d7e-3811e9047fc7-kube-api-access-4l5cn\") pod \"redhat-marketplace-8t7cn\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") " pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:55 crc kubenswrapper[5034]: I0105 23:22:55.911269 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:22:56 crc kubenswrapper[5034]: I0105 23:22:56.374554 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t7cn"]
Jan 05 23:22:57 crc kubenswrapper[5034]: I0105 23:22:57.080231 5034 generic.go:334] "Generic (PLEG): container finished" podID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerID="1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6" exitCode=0
Jan 05 23:22:57 crc kubenswrapper[5034]: I0105 23:22:57.080339 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t7cn" event={"ID":"55053be5-68cd-4eac-8d7e-3811e9047fc7","Type":"ContainerDied","Data":"1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6"}
Jan 05 23:22:57 crc kubenswrapper[5034]: I0105 23:22:57.080630 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t7cn" event={"ID":"55053be5-68cd-4eac-8d7e-3811e9047fc7","Type":"ContainerStarted","Data":"a6a2dadc9c17d056b7db47dadf8f56c01bfafb4f91d7d230ed134c76bba5cb46"}
Jan 05 23:22:58 crc kubenswrapper[5034]: I0105 23:22:58.095694 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t7cn" event={"ID":"55053be5-68cd-4eac-8d7e-3811e9047fc7","Type":"ContainerStarted","Data":"384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b"}
Jan 05 23:22:58 crc kubenswrapper[5034]: I0105 23:22:58.905751 5034 scope.go:117] "RemoveContainer" containerID="d498e9c7a6b564ccfdd023877ccf32773ff6ca0e020971f96008010bc8a589b4"
Jan 05 23:22:58 crc kubenswrapper[5034]: I0105 23:22:58.942103 5034 scope.go:117] "RemoveContainer" containerID="4b1b1cf0dc47123b2a4de4911aa4b46bdfba020a169d383096633fc365328be3"
Jan 05 23:22:58 crc kubenswrapper[5034]: I0105 23:22:58.968880 5034 scope.go:117] "RemoveContainer" containerID="53a0409ee5c090428d20604d4c0250cb18c472e2fb758834b5f1b264a5a1adad"
Jan 05 23:22:59 crc kubenswrapper[5034]: I0105 23:22:59.004791 5034 scope.go:117] "RemoveContainer" containerID="52b7231ba46d22ae0b8e17b51d8aff5a1462b5d43111674dc9a2afcefd8ae7eb"
Jan 05 23:22:59 crc kubenswrapper[5034]: I0105 23:22:59.058584 5034 scope.go:117] "RemoveContainer" containerID="9a98ed559da3cedf9683e385596d8a4e49eeacf36603ce804050bcae36f0a0b1"
Jan 05 23:22:59 crc kubenswrapper[5034]: I0105 23:22:59.105458 5034 scope.go:117] "RemoveContainer" containerID="856634d8cf74ebde242c272d9038d385ac178182c9251bd1ee70ae17e41cef64"
Jan 05 23:22:59 crc kubenswrapper[5034]: I0105 23:22:59.162773 5034 generic.go:334] "Generic (PLEG): container finished" podID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerID="384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b" exitCode=0
Jan 05 23:22:59 crc kubenswrapper[5034]: I0105 23:22:59.163291 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t7cn" event={"ID":"55053be5-68cd-4eac-8d7e-3811e9047fc7","Type":"ContainerDied","Data":"384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b"}
Jan 05 23:22:59 crc kubenswrapper[5034]: I0105 23:22:59.172912 5034 scope.go:117] "RemoveContainer" containerID="750a5ca0828472b5d82d971c7c84eabd5cc6e2784961cb3db6d5413bbefd51ee"
Jan 05 23:23:00 crc kubenswrapper[5034]: I0105 23:23:00.184539 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t7cn" event={"ID":"55053be5-68cd-4eac-8d7e-3811e9047fc7","Type":"ContainerStarted","Data":"78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5"}
Jan 05 23:23:00 crc kubenswrapper[5034]: I0105 23:23:00.209563 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8t7cn" podStartSLOduration=2.689044714 podStartE2EDuration="5.209539744s" podCreationTimestamp="2026-01-05 23:22:55 +0000 UTC" firstStartedPulling="2026-01-05 23:22:57.082561152 +0000 UTC m=+5469.454560591" lastFinishedPulling="2026-01-05 23:22:59.603056182 +0000 UTC m=+5471.975055621" observedRunningTime="2026-01-05 23:23:00.206349824 +0000 UTC m=+5472.578349263" watchObservedRunningTime="2026-01-05 23:23:00.209539744 +0000 UTC m=+5472.581539183"
Jan 05 23:23:05 crc kubenswrapper[5034]: I0105 23:23:05.912038 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:23:05 crc kubenswrapper[5034]: I0105 23:23:05.912806 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:23:05 crc kubenswrapper[5034]: I0105 23:23:05.958943 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:23:06 crc kubenswrapper[5034]: I0105 23:23:06.297928 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:23:06 crc kubenswrapper[5034]: I0105 23:23:06.355491 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t7cn"]
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.265960 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8t7cn" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerName="registry-server" containerID="cri-o://78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5" gracePeriod=2
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.697511 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.782802 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-catalog-content\") pod \"55053be5-68cd-4eac-8d7e-3811e9047fc7\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") "
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.783012 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-utilities\") pod \"55053be5-68cd-4eac-8d7e-3811e9047fc7\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") "
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.783068 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l5cn\" (UniqueName: \"kubernetes.io/projected/55053be5-68cd-4eac-8d7e-3811e9047fc7-kube-api-access-4l5cn\") pod \"55053be5-68cd-4eac-8d7e-3811e9047fc7\" (UID: \"55053be5-68cd-4eac-8d7e-3811e9047fc7\") "
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.786796 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-utilities" (OuterVolumeSpecName: "utilities") pod "55053be5-68cd-4eac-8d7e-3811e9047fc7" (UID: "55053be5-68cd-4eac-8d7e-3811e9047fc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.794446 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55053be5-68cd-4eac-8d7e-3811e9047fc7-kube-api-access-4l5cn" (OuterVolumeSpecName: "kube-api-access-4l5cn") pod "55053be5-68cd-4eac-8d7e-3811e9047fc7" (UID: "55053be5-68cd-4eac-8d7e-3811e9047fc7"). InnerVolumeSpecName "kube-api-access-4l5cn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.897258 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-utilities\") on node \"crc\" DevicePath \"\""
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.897306 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l5cn\" (UniqueName: \"kubernetes.io/projected/55053be5-68cd-4eac-8d7e-3811e9047fc7-kube-api-access-4l5cn\") on node \"crc\" DevicePath \"\""
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.918012 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55053be5-68cd-4eac-8d7e-3811e9047fc7" (UID: "55053be5-68cd-4eac-8d7e-3811e9047fc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:23:08 crc kubenswrapper[5034]: I0105 23:23:08.998982 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55053be5-68cd-4eac-8d7e-3811e9047fc7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.276457 5034 generic.go:334] "Generic (PLEG): container finished" podID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerID="78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5" exitCode=0
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.276500 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t7cn" event={"ID":"55053be5-68cd-4eac-8d7e-3811e9047fc7","Type":"ContainerDied","Data":"78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5"}
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.276530 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t7cn" event={"ID":"55053be5-68cd-4eac-8d7e-3811e9047fc7","Type":"ContainerDied","Data":"a6a2dadc9c17d056b7db47dadf8f56c01bfafb4f91d7d230ed134c76bba5cb46"}
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.276551 5034 scope.go:117] "RemoveContainer" containerID="78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.276560 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8t7cn"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.301363 5034 scope.go:117] "RemoveContainer" containerID="384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.309735 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t7cn"]
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.318763 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t7cn"]
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.333072 5034 scope.go:117] "RemoveContainer" containerID="1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.365671 5034 scope.go:117] "RemoveContainer" containerID="78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5"
Jan 05 23:23:09 crc kubenswrapper[5034]: E0105 23:23:09.366443 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5\": container with ID starting with 78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5 not found: ID does not exist" containerID="78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.366555 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5"} err="failed to get container status \"78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5\": rpc error: code = NotFound desc = could not find container \"78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5\": container with ID starting with 78adf92c495a89273d120dd39130e0cc408f80d3b83c0007c1a3c3aaaa7f7fe5 not found: ID does not exist"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.366596 5034 scope.go:117] "RemoveContainer" containerID="384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b"
Jan 05 23:23:09 crc kubenswrapper[5034]: E0105 23:23:09.367006 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b\": container with ID starting with 384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b not found: ID does not exist" containerID="384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.367040 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b"} err="failed to get container status \"384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b\": rpc error: code = NotFound desc = could not find container \"384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b\": container with ID starting with 384ac9ec09ef0a6e732079300515a019d1e7b37ccc1ad67450600b1c8bdd001b not found: ID does not exist"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.367065 5034 scope.go:117] "RemoveContainer" containerID="1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6"
Jan 05 23:23:09 crc kubenswrapper[5034]: E0105 23:23:09.367331 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6\": container with ID starting with 1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6 not found: ID does not exist" containerID="1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.367353 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6"} err="failed to get container status \"1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6\": rpc error: code = NotFound desc = could not find container \"1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6\": container with ID starting with 1321d6f32b2081d1e21b035befb4d14aa3c3edb77b97c71e793ebe8b5fe220b6 not found: ID does not exist"
Jan 05 23:23:09 crc kubenswrapper[5034]: I0105 23:23:09.848634 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" path="/var/lib/kubelet/pods/55053be5-68cd-4eac-8d7e-3811e9047fc7/volumes"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.276335 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-qjh2z"]
Jan 05 23:23:11 crc kubenswrapper[5034]: E0105 23:23:11.278209 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerName="registry-server"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.278314 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerName="registry-server"
Jan 05 23:23:11 crc kubenswrapper[5034]: E0105 23:23:11.278386 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerName="extract-content"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.278446 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerName="extract-content"
Jan 05 23:23:11 crc kubenswrapper[5034]: E0105 23:23:11.278501 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerName="extract-utilities"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.278556 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerName="extract-utilities"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.278780 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="55053be5-68cd-4eac-8d7e-3811e9047fc7" containerName="registry-server"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.279743 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qjh2z"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.285331 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f936-account-create-update-scbns"]
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.286745 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f936-account-create-update-scbns"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.291669 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.303188 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f936-account-create-update-scbns"]
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.311527 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qjh2z"]
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.443944 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnpd6\" (UniqueName: \"kubernetes.io/projected/57dc19d8-9194-437d-94a7-191f3a731c2e-kube-api-access-qnpd6\") pod \"barbican-f936-account-create-update-scbns\" (UID: \"57dc19d8-9194-437d-94a7-191f3a731c2e\") " pod="openstack/barbican-f936-account-create-update-scbns"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.444018 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57dc19d8-9194-437d-94a7-191f3a731c2e-operator-scripts\") pod \"barbican-f936-account-create-update-scbns\" (UID: \"57dc19d8-9194-437d-94a7-191f3a731c2e\") " pod="openstack/barbican-f936-account-create-update-scbns"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.444218 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-operator-scripts\") pod \"barbican-db-create-qjh2z\" (UID: \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\") " pod="openstack/barbican-db-create-qjh2z"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.444441 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28thv\" (UniqueName: \"kubernetes.io/projected/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-kube-api-access-28thv\") pod \"barbican-db-create-qjh2z\" (UID: \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\") " pod="openstack/barbican-db-create-qjh2z"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.545983 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnpd6\" (UniqueName: \"kubernetes.io/projected/57dc19d8-9194-437d-94a7-191f3a731c2e-kube-api-access-qnpd6\") pod \"barbican-f936-account-create-update-scbns\" (UID: \"57dc19d8-9194-437d-94a7-191f3a731c2e\") " pod="openstack/barbican-f936-account-create-update-scbns"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.546043 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57dc19d8-9194-437d-94a7-191f3a731c2e-operator-scripts\") pod \"barbican-f936-account-create-update-scbns\" (UID: \"57dc19d8-9194-437d-94a7-191f3a731c2e\") " pod="openstack/barbican-f936-account-create-update-scbns"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.546116 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-operator-scripts\") pod \"barbican-db-create-qjh2z\" (UID: \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\") " pod="openstack/barbican-db-create-qjh2z"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.546183 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28thv\" (UniqueName: \"kubernetes.io/projected/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-kube-api-access-28thv\") pod \"barbican-db-create-qjh2z\" (UID: \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\") " pod="openstack/barbican-db-create-qjh2z"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.547023 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-operator-scripts\") pod \"barbican-db-create-qjh2z\" (UID: \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\") " pod="openstack/barbican-db-create-qjh2z"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.547267 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57dc19d8-9194-437d-94a7-191f3a731c2e-operator-scripts\") pod \"barbican-f936-account-create-update-scbns\" (UID: \"57dc19d8-9194-437d-94a7-191f3a731c2e\") " pod="openstack/barbican-f936-account-create-update-scbns"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.566460 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnpd6\" (UniqueName: \"kubernetes.io/projected/57dc19d8-9194-437d-94a7-191f3a731c2e-kube-api-access-qnpd6\") pod \"barbican-f936-account-create-update-scbns\" (UID: \"57dc19d8-9194-437d-94a7-191f3a731c2e\") " pod="openstack/barbican-f936-account-create-update-scbns"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.573387 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28thv\" (UniqueName: \"kubernetes.io/projected/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-kube-api-access-28thv\") pod \"barbican-db-create-qjh2z\" (UID: \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\") " pod="openstack/barbican-db-create-qjh2z"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.600772 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qjh2z"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.609914 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f936-account-create-update-scbns"
Jan 05 23:23:11 crc kubenswrapper[5034]: I0105 23:23:11.888420 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f936-account-create-update-scbns"]
Jan 05 23:23:12 crc kubenswrapper[5034]: I0105 23:23:12.038872 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qjh2z"]
Jan 05 23:23:12 crc kubenswrapper[5034]: W0105 23:23:12.042215 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6bb3fef_f9db_4ca3_a4f3_bfee9662d80f.slice/crio-486890a522f7115b0e447051bf9423f740e1c1e757061f16125f1bee7c871b02 WatchSource:0}: Error finding container 486890a522f7115b0e447051bf9423f740e1c1e757061f16125f1bee7c871b02: Status 404 returned error can't find the container with id 486890a522f7115b0e447051bf9423f740e1c1e757061f16125f1bee7c871b02
Jan 05 23:23:12 crc kubenswrapper[5034]: I0105 23:23:12.321676 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qjh2z" event={"ID":"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f","Type":"ContainerStarted","Data":"525b566f1e35e56f23580f1607e00e020f86aa5c4d9a03c9c828fccf88561349"}
Jan 05 23:23:12 crc kubenswrapper[5034]: I0105 23:23:12.321732 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qjh2z" event={"ID":"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f","Type":"ContainerStarted","Data":"486890a522f7115b0e447051bf9423f740e1c1e757061f16125f1bee7c871b02"}
Jan 05 23:23:12 crc kubenswrapper[5034]: I0105 23:23:12.324442 5034 generic.go:334] "Generic (PLEG): container finished" podID="57dc19d8-9194-437d-94a7-191f3a731c2e" containerID="4cd0e45466d381cb45f9a6e10b33dcc2103bde34ddad1318b2b070a8b7ecf005" exitCode=0
Jan 05 23:23:12 crc kubenswrapper[5034]: I0105 23:23:12.324479 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f936-account-create-update-scbns" event={"ID":"57dc19d8-9194-437d-94a7-191f3a731c2e","Type":"ContainerDied","Data":"4cd0e45466d381cb45f9a6e10b33dcc2103bde34ddad1318b2b070a8b7ecf005"}
Jan 05 23:23:12 crc kubenswrapper[5034]: I0105 23:23:12.324501 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f936-account-create-update-scbns" event={"ID":"57dc19d8-9194-437d-94a7-191f3a731c2e","Type":"ContainerStarted","Data":"8311639c8b28facf94abea23ef8b1d3407878473b19db360deceb70ae6322f34"}
Jan 05 23:23:12 crc kubenswrapper[5034]: I0105 23:23:12.363182 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-qjh2z" podStartSLOduration=1.363157236 podStartE2EDuration="1.363157236s" podCreationTimestamp="2026-01-05 23:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:23:12.343738375 +0000 UTC m=+5484.715737814" watchObservedRunningTime="2026-01-05 23:23:12.363157236 +0000 UTC m=+5484.735156675"
Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.332959 5034 generic.go:334] "Generic (PLEG): container finished" podID="a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f" containerID="525b566f1e35e56f23580f1607e00e020f86aa5c4d9a03c9c828fccf88561349" exitCode=0
Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.334070 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qjh2z"
event={"ID":"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f","Type":"ContainerDied","Data":"525b566f1e35e56f23580f1607e00e020f86aa5c4d9a03c9c828fccf88561349"} Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.703036 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f936-account-create-update-scbns" Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.791052 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57dc19d8-9194-437d-94a7-191f3a731c2e-operator-scripts\") pod \"57dc19d8-9194-437d-94a7-191f3a731c2e\" (UID: \"57dc19d8-9194-437d-94a7-191f3a731c2e\") " Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.791172 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnpd6\" (UniqueName: \"kubernetes.io/projected/57dc19d8-9194-437d-94a7-191f3a731c2e-kube-api-access-qnpd6\") pod \"57dc19d8-9194-437d-94a7-191f3a731c2e\" (UID: \"57dc19d8-9194-437d-94a7-191f3a731c2e\") " Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.793106 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57dc19d8-9194-437d-94a7-191f3a731c2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57dc19d8-9194-437d-94a7-191f3a731c2e" (UID: "57dc19d8-9194-437d-94a7-191f3a731c2e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.804714 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57dc19d8-9194-437d-94a7-191f3a731c2e-kube-api-access-qnpd6" (OuterVolumeSpecName: "kube-api-access-qnpd6") pod "57dc19d8-9194-437d-94a7-191f3a731c2e" (UID: "57dc19d8-9194-437d-94a7-191f3a731c2e"). InnerVolumeSpecName "kube-api-access-qnpd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.893597 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57dc19d8-9194-437d-94a7-191f3a731c2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:13 crc kubenswrapper[5034]: I0105 23:23:13.893971 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnpd6\" (UniqueName: \"kubernetes.io/projected/57dc19d8-9194-437d-94a7-191f3a731c2e-kube-api-access-qnpd6\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.344170 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f936-account-create-update-scbns" event={"ID":"57dc19d8-9194-437d-94a7-191f3a731c2e","Type":"ContainerDied","Data":"8311639c8b28facf94abea23ef8b1d3407878473b19db360deceb70ae6322f34"} Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.344218 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8311639c8b28facf94abea23ef8b1d3407878473b19db360deceb70ae6322f34" Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.344303 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f936-account-create-update-scbns" Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.669194 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-qjh2z" Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.811685 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28thv\" (UniqueName: \"kubernetes.io/projected/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-kube-api-access-28thv\") pod \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\" (UID: \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\") " Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.812096 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-operator-scripts\") pod \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\" (UID: \"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f\") " Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.812584 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f" (UID: "a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.812738 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.826904 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-kube-api-access-28thv" (OuterVolumeSpecName: "kube-api-access-28thv") pod "a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f" (UID: "a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f"). InnerVolumeSpecName "kube-api-access-28thv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:23:14 crc kubenswrapper[5034]: I0105 23:23:14.914839 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28thv\" (UniqueName: \"kubernetes.io/projected/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f-kube-api-access-28thv\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:15 crc kubenswrapper[5034]: I0105 23:23:15.352032 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qjh2z" event={"ID":"a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f","Type":"ContainerDied","Data":"486890a522f7115b0e447051bf9423f740e1c1e757061f16125f1bee7c871b02"} Jan 05 23:23:15 crc kubenswrapper[5034]: I0105 23:23:15.352125 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="486890a522f7115b0e447051bf9423f740e1c1e757061f16125f1bee7c871b02" Jan 05 23:23:15 crc kubenswrapper[5034]: I0105 23:23:15.352134 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-qjh2z" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.548445 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-l6bzc"] Jan 05 23:23:16 crc kubenswrapper[5034]: E0105 23:23:16.549058 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57dc19d8-9194-437d-94a7-191f3a731c2e" containerName="mariadb-account-create-update" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.549163 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="57dc19d8-9194-437d-94a7-191f3a731c2e" containerName="mariadb-account-create-update" Jan 05 23:23:16 crc kubenswrapper[5034]: E0105 23:23:16.549198 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f" containerName="mariadb-database-create" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.549207 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f" containerName="mariadb-database-create" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.549430 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="57dc19d8-9194-437d-94a7-191f3a731c2e" containerName="mariadb-account-create-update" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.549457 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f" containerName="mariadb-database-create" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.550214 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.552244 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.553301 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nvmxs" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.565114 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l6bzc"] Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.647780 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-combined-ca-bundle\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.647842 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jc4\" (UniqueName: \"kubernetes.io/projected/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-kube-api-access-v7jc4\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.647871 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-db-sync-config-data\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.750581 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-combined-ca-bundle\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.751244 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7jc4\" (UniqueName: \"kubernetes.io/projected/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-kube-api-access-v7jc4\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.751348 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-db-sync-config-data\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.769895 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-combined-ca-bundle\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.774227 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-db-sync-config-data\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.779413 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7jc4\" (UniqueName: \"kubernetes.io/projected/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-kube-api-access-v7jc4\") pod \"barbican-db-sync-l6bzc\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:16 crc kubenswrapper[5034]: I0105 23:23:16.869609 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:17 crc kubenswrapper[5034]: I0105 23:23:17.302270 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l6bzc"] Jan 05 23:23:17 crc kubenswrapper[5034]: I0105 23:23:17.369176 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6bzc" event={"ID":"e4d37640-0cda-47d4-8f50-5e6ce519ca8e","Type":"ContainerStarted","Data":"a8a1a31959b7ff58286949fd35090a2bc81831ba1b018a5ed32ee4dd4e071e4d"} Jan 05 23:23:18 crc kubenswrapper[5034]: I0105 23:23:18.378620 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6bzc" event={"ID":"e4d37640-0cda-47d4-8f50-5e6ce519ca8e","Type":"ContainerStarted","Data":"6f159d85178c914b7f3db80ef61c2de61175c338db4d4773a8139adc8e22f91e"} Jan 05 23:23:18 crc kubenswrapper[5034]: I0105 23:23:18.398688 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-l6bzc" podStartSLOduration=2.398659171 podStartE2EDuration="2.398659171s" podCreationTimestamp="2026-01-05 23:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:23:18.394275786 +0000 UTC m=+5490.766275225" watchObservedRunningTime="2026-01-05 23:23:18.398659171 +0000 UTC m=+5490.770658620" Jan 05 23:23:22 crc kubenswrapper[5034]: I0105 23:23:22.408633 5034 generic.go:334] "Generic (PLEG): container finished" podID="e4d37640-0cda-47d4-8f50-5e6ce519ca8e" containerID="6f159d85178c914b7f3db80ef61c2de61175c338db4d4773a8139adc8e22f91e" exitCode=0 Jan 05 23:23:22 crc kubenswrapper[5034]: I0105 23:23:22.408773 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6bzc" event={"ID":"e4d37640-0cda-47d4-8f50-5e6ce519ca8e","Type":"ContainerDied","Data":"6f159d85178c914b7f3db80ef61c2de61175c338db4d4773a8139adc8e22f91e"} Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.710855 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.777934 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-db-sync-config-data\") pod \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.777975 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7jc4\" (UniqueName: \"kubernetes.io/projected/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-kube-api-access-v7jc4\") pod \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.778094 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-combined-ca-bundle\") pod \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\" (UID: \"e4d37640-0cda-47d4-8f50-5e6ce519ca8e\") " Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.783607 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-kube-api-access-v7jc4" (OuterVolumeSpecName: "kube-api-access-v7jc4") pod "e4d37640-0cda-47d4-8f50-5e6ce519ca8e" (UID: "e4d37640-0cda-47d4-8f50-5e6ce519ca8e"). InnerVolumeSpecName "kube-api-access-v7jc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.783660 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e4d37640-0cda-47d4-8f50-5e6ce519ca8e" (UID: "e4d37640-0cda-47d4-8f50-5e6ce519ca8e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.802175 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4d37640-0cda-47d4-8f50-5e6ce519ca8e" (UID: "e4d37640-0cda-47d4-8f50-5e6ce519ca8e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.883153 5034 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.883193 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7jc4\" (UniqueName: \"kubernetes.io/projected/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-kube-api-access-v7jc4\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:23 crc kubenswrapper[5034]: I0105 23:23:23.883206 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d37640-0cda-47d4-8f50-5e6ce519ca8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.425851 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6bzc" event={"ID":"e4d37640-0cda-47d4-8f50-5e6ce519ca8e","Type":"ContainerDied","Data":"a8a1a31959b7ff58286949fd35090a2bc81831ba1b018a5ed32ee4dd4e071e4d"} Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.425913 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6bzc" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.425910 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a1a31959b7ff58286949fd35090a2bc81831ba1b018a5ed32ee4dd4e071e4d" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.651746 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-86ddb9889-nhjck"] Jan 05 23:23:24 crc kubenswrapper[5034]: E0105 23:23:24.652532 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d37640-0cda-47d4-8f50-5e6ce519ca8e" containerName="barbican-db-sync" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.652648 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d37640-0cda-47d4-8f50-5e6ce519ca8e" containerName="barbican-db-sync" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.652927 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d37640-0cda-47d4-8f50-5e6ce519ca8e" containerName="barbican-db-sync" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.654093 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.658822 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nvmxs" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.660010 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.663380 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.684583 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-747547f55d-5jgl7"] Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.686689 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.695903 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.698659 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-config-data-custom\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.701282 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-config-data\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.701449 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-747547f55d-5jgl7"] Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.701727 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0586e65e-8f8f-40c4-be35-2753b10187e3-logs\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.701769 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-combined-ca-bundle\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.702436 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cd6z\" (UniqueName: \"kubernetes.io/projected/0586e65e-8f8f-40c4-be35-2753b10187e3-kube-api-access-9cd6z\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.772734 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86ddb9889-nhjck"] Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.787423 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d8fd969f7-5b8sv"] Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.789343 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.804551 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-config-data-custom\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.806846 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cd6z\" (UniqueName: \"kubernetes.io/projected/0586e65e-8f8f-40c4-be35-2753b10187e3-kube-api-access-9cd6z\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.806897 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-combined-ca-bundle\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.806940 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-config-data-custom\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.806973 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lq8t\" (UniqueName: \"kubernetes.io/projected/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-kube-api-access-8lq8t\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.806997 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-config-data\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.807065 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-config-data\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.807524 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-logs\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.807569 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0586e65e-8f8f-40c4-be35-2753b10187e3-logs\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.807594 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-combined-ca-bundle\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.815342 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8fd969f7-5b8sv"] Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.818293 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0586e65e-8f8f-40c4-be35-2753b10187e3-logs\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.825704 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-combined-ca-bundle\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.828596 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-config-data\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.829221 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0586e65e-8f8f-40c4-be35-2753b10187e3-config-data-custom\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.850508 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cd6z\" (UniqueName: \"kubernetes.io/projected/0586e65e-8f8f-40c4-be35-2753b10187e3-kube-api-access-9cd6z\") pod \"barbican-worker-86ddb9889-nhjck\" (UID: \"0586e65e-8f8f-40c4-be35-2753b10187e3\") " pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.870790 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6748c566c8-2p2dz"] Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.872394 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.891201 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6748c566c8-2p2dz"] Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.892241 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.910726 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-config-data\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.910832 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.910884 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.910944 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-logs\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.910969 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-dns-svc\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.911005 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-config\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.911038 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgcqr\" (UniqueName: \"kubernetes.io/projected/12fcda91-6451-4fd1-a777-6d02f3c82aee-kube-api-access-tgcqr\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.911091 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-config-data-custom\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" 
(UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.911154 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-combined-ca-bundle\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.911363 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lq8t\" (UniqueName: \"kubernetes.io/projected/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-kube-api-access-8lq8t\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.911930 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-logs\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.918320 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-config-data\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.921630 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-config-data-custom\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.927822 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-combined-ca-bundle\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:24 crc kubenswrapper[5034]: I0105 23:23:24.936025 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lq8t\" (UniqueName: \"kubernetes.io/projected/ceaa20cf-58aa-4c8b-ae30-96e31512e48a-kube-api-access-8lq8t\") pod \"barbican-keystone-listener-747547f55d-5jgl7\" (UID: \"ceaa20cf-58aa-4c8b-ae30-96e31512e48a\") " pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.001458 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-86ddb9889-nhjck" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013473 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013546 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knqb\" (UniqueName: \"kubernetes.io/projected/47d3862b-0b81-43bd-aa08-e26112d99752-kube-api-access-4knqb\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013590 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-combined-ca-bundle\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013614 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data-custom\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013665 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013702 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013745 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-dns-svc\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013777 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-config\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013806 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgcqr\" (UniqueName: \"kubernetes.io/projected/12fcda91-6451-4fd1-a777-6d02f3c82aee-kube-api-access-tgcqr\") pod 
\"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.013838 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3862b-0b81-43bd-aa08-e26112d99752-logs\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.020205 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.022291 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.022856 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-dns-svc\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.023436 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-config\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.025514 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.038272 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgcqr\" (UniqueName: \"kubernetes.io/projected/12fcda91-6451-4fd1-a777-6d02f3c82aee-kube-api-access-tgcqr\") pod \"dnsmasq-dns-5d8fd969f7-5b8sv\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.117314 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3862b-0b81-43bd-aa08-e26112d99752-logs\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.117408 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.117438 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4knqb\" (UniqueName: \"kubernetes.io/projected/47d3862b-0b81-43bd-aa08-e26112d99752-kube-api-access-4knqb\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.117483 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-combined-ca-bundle\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.117507 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data-custom\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.118389 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3862b-0b81-43bd-aa08-e26112d99752-logs\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.123227 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.127590 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-combined-ca-bundle\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" 
Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.127848 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data-custom\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.138887 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4knqb\" (UniqueName: \"kubernetes.io/projected/47d3862b-0b81-43bd-aa08-e26112d99752-kube-api-access-4knqb\") pod \"barbican-api-6748c566c8-2p2dz\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.315367 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.335288 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.570837 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86ddb9889-nhjck"] Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.671783 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-747547f55d-5jgl7"] Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.900535 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8fd969f7-5b8sv"] Jan 05 23:23:25 crc kubenswrapper[5034]: I0105 23:23:25.968622 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6748c566c8-2p2dz"] Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.458453 5034 generic.go:334] "Generic (PLEG): container finished" podID="12fcda91-6451-4fd1-a777-6d02f3c82aee" containerID="0604a3083f6499434cb8d51d50f8a1052fc51afa84ed1734662c980e94b3e4d0" exitCode=0 Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.458500 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" event={"ID":"12fcda91-6451-4fd1-a777-6d02f3c82aee","Type":"ContainerDied","Data":"0604a3083f6499434cb8d51d50f8a1052fc51afa84ed1734662c980e94b3e4d0"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.458835 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" event={"ID":"12fcda91-6451-4fd1-a777-6d02f3c82aee","Type":"ContainerStarted","Data":"4816d32655c9b071d8b127ef5953b19d1edecccc9796f0abd36472bef6b04a00"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.462477 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6748c566c8-2p2dz" event={"ID":"47d3862b-0b81-43bd-aa08-e26112d99752","Type":"ContainerStarted","Data":"445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.462528 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6748c566c8-2p2dz" event={"ID":"47d3862b-0b81-43bd-aa08-e26112d99752","Type":"ContainerStarted","Data":"716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.462562 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6748c566c8-2p2dz" 
event={"ID":"47d3862b-0b81-43bd-aa08-e26112d99752","Type":"ContainerStarted","Data":"076897b1b96cce9e40a51e366773166c10552f0ad2f5e3b47248e35ea6b6735c"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.467744 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" event={"ID":"ceaa20cf-58aa-4c8b-ae30-96e31512e48a","Type":"ContainerStarted","Data":"06859d3f19d2f7c611e3bcbfed239f110fcd141ff48b445b3e5dd902aedcb15b"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.467811 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" event={"ID":"ceaa20cf-58aa-4c8b-ae30-96e31512e48a","Type":"ContainerStarted","Data":"e05dfc28dae58daca746540b1643872d56a424b51cca2eb6f88d5448c14aeca6"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.467826 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" event={"ID":"ceaa20cf-58aa-4c8b-ae30-96e31512e48a","Type":"ContainerStarted","Data":"15145126fb097995d967a61e8e42e27af4fbb24f12b8ddface4c54acdfbd57a2"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.481562 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86ddb9889-nhjck" event={"ID":"0586e65e-8f8f-40c4-be35-2753b10187e3","Type":"ContainerStarted","Data":"a3c6e528911ebbeceabe40054038728d8e058445b031fad6d5621c755902bc00"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.481627 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86ddb9889-nhjck" event={"ID":"0586e65e-8f8f-40c4-be35-2753b10187e3","Type":"ContainerStarted","Data":"2ab8d1a36ccc82e0abaefbd1ee3883605132f8c56a0a124054fec3b5cffdd784"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.481639 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86ddb9889-nhjck" event={"ID":"0586e65e-8f8f-40c4-be35-2753b10187e3","Type":"ContainerStarted","Data":"7716337145b212fe7af8db5a9c31bb882746109dc60cdcdf40a1b84dc886bb43"} Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.583882 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-86ddb9889-nhjck" podStartSLOduration=2.58379514 podStartE2EDuration="2.58379514s" podCreationTimestamp="2026-01-05 23:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:23:26.555258601 +0000 UTC m=+5498.927258040" watchObservedRunningTime="2026-01-05 23:23:26.58379514 +0000 UTC m=+5498.955794589" Jan 05 23:23:26 crc kubenswrapper[5034]: I0105 23:23:26.607985 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-747547f55d-5jgl7" podStartSLOduration=2.607959936 podStartE2EDuration="2.607959936s" podCreationTimestamp="2026-01-05 23:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:23:26.601690988 +0000 UTC m=+5498.973690427" watchObservedRunningTime="2026-01-05 23:23:26.607959936 +0000 UTC m=+5498.979959375" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.489677 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76d8878c56-6h7md"] Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.492973 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" event={"ID":"12fcda91-6451-4fd1-a777-6d02f3c82aee","Type":"ContainerStarted","Data":"c982a86da6dec8cb9e305b3b8c45177f1297c47ff259f45934250ba5e25e51dd"} Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.493221 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.494179 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.494638 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.495071 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.496856 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.503533 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.559889 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76d8878c56-6h7md"] Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.592738 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-combined-ca-bundle\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.592804 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dw85\" (UniqueName: \"kubernetes.io/projected/b405fa3f-251f-4a35-a756-ec7791a18148-kube-api-access-4dw85\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.592860 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-config-data-custom\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.592994 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-internal-tls-certs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.593138 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b405fa3f-251f-4a35-a756-ec7791a18148-logs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.593193 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-public-tls-certs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.593254 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-config-data\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.594064 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" podStartSLOduration=3.594023909 podStartE2EDuration="3.594023909s" podCreationTimestamp="2026-01-05 23:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:23:27.559615393 +0000 UTC m=+5499.931614832" watchObservedRunningTime="2026-01-05 23:23:27.594023909 +0000 UTC m=+5499.966023368" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.624500 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6748c566c8-2p2dz" podStartSLOduration=3.624472064 podStartE2EDuration="3.624472064s" podCreationTimestamp="2026-01-05 23:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:23:27.588254706 +0000 UTC m=+5499.960254145" watchObservedRunningTime="2026-01-05 23:23:27.624472064 +0000 UTC m=+5499.996471503" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.694842 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-config-data\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.694927 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-combined-ca-bundle\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.694979 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dw85\" (UniqueName: \"kubernetes.io/projected/b405fa3f-251f-4a35-a756-ec7791a18148-kube-api-access-4dw85\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.695013 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-config-data-custom\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.695061 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-internal-tls-certs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.695138 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b405fa3f-251f-4a35-a756-ec7791a18148-logs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.695171 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-public-tls-certs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.696989 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b405fa3f-251f-4a35-a756-ec7791a18148-logs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.701181 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-config-data-custom\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.701432 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-public-tls-certs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.701850 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-internal-tls-certs\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.704723 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-config-data\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.714315 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dw85\" (UniqueName: \"kubernetes.io/projected/b405fa3f-251f-4a35-a756-ec7791a18148-kube-api-access-4dw85\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.714598 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b405fa3f-251f-4a35-a756-ec7791a18148-combined-ca-bundle\") pod \"barbican-api-76d8878c56-6h7md\" (UID: \"b405fa3f-251f-4a35-a756-ec7791a18148\") " pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:27 crc kubenswrapper[5034]: I0105 23:23:27.828251 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:28 crc kubenswrapper[5034]: I0105 23:23:28.335845 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76d8878c56-6h7md"] Jan 05 23:23:28 crc kubenswrapper[5034]: I0105 23:23:28.508021 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76d8878c56-6h7md" event={"ID":"b405fa3f-251f-4a35-a756-ec7791a18148","Type":"ContainerStarted","Data":"42511db80eb74f25894bd78eb218decc9033f411b2d3e83668f4ab8970d04b3e"} Jan 05 23:23:29 crc kubenswrapper[5034]: I0105 23:23:29.517973 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76d8878c56-6h7md" event={"ID":"b405fa3f-251f-4a35-a756-ec7791a18148","Type":"ContainerStarted","Data":"eb52444307562d9f2db7cd47be39177685530c9df2c08935bf27dd1e9dd45e72"} Jan 05 23:23:29 crc kubenswrapper[5034]: I0105 23:23:29.518325 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76d8878c56-6h7md" event={"ID":"b405fa3f-251f-4a35-a756-ec7791a18148","Type":"ContainerStarted","Data":"7a7e6f7074f382e3ca925385077c3d7274d3ba5130fe9c4ec5509d8f8fc25896"} Jan 05 23:23:29 crc kubenswrapper[5034]: I0105 23:23:29.540291 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76d8878c56-6h7md" podStartSLOduration=2.5402716229999998 podStartE2EDuration="2.540271623s" podCreationTimestamp="2026-01-05 23:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:23:29.536661271 +0000 UTC m=+5501.908660710" watchObservedRunningTime="2026-01-05 23:23:29.540271623 +0000 UTC m=+5501.912271062" Jan 05 23:23:30 crc kubenswrapper[5034]: I0105 23:23:30.525808 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:30 crc kubenswrapper[5034]: I0105 23:23:30.525870 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76d8878c56-6h7md" Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.317236 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.386814 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f46ff4fd9-ksdjt"] Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.387177 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" podUID="a60218b9-937a-431f-8849-babc2ca5e2c3" containerName="dnsmasq-dns" containerID="cri-o://1a0310e236a951b75eb049f6f13175d8ff0fc02d9fe78d90ebb144438922e2ff" gracePeriod=10 Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.579401 5034 generic.go:334] "Generic (PLEG): container finished" podID="a60218b9-937a-431f-8849-babc2ca5e2c3" containerID="1a0310e236a951b75eb049f6f13175d8ff0fc02d9fe78d90ebb144438922e2ff" exitCode=0 Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.579491 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" event={"ID":"a60218b9-937a-431f-8849-babc2ca5e2c3","Type":"ContainerDied","Data":"1a0310e236a951b75eb049f6f13175d8ff0fc02d9fe78d90ebb144438922e2ff"} Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.905392 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.944851 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-sb\") pod \"a60218b9-937a-431f-8849-babc2ca5e2c3\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.944972 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbchw\" (UniqueName: \"kubernetes.io/projected/a60218b9-937a-431f-8849-babc2ca5e2c3-kube-api-access-lbchw\") pod \"a60218b9-937a-431f-8849-babc2ca5e2c3\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.945289 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-nb\") pod \"a60218b9-937a-431f-8849-babc2ca5e2c3\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.945316 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-dns-svc\") pod \"a60218b9-937a-431f-8849-babc2ca5e2c3\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.945384 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-config\") pod \"a60218b9-937a-431f-8849-babc2ca5e2c3\" (UID: \"a60218b9-937a-431f-8849-babc2ca5e2c3\") " Jan 05 23:23:35 crc kubenswrapper[5034]: I0105 23:23:35.955525 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60218b9-937a-431f-8849-babc2ca5e2c3-kube-api-access-lbchw" (OuterVolumeSpecName: "kube-api-access-lbchw") pod "a60218b9-937a-431f-8849-babc2ca5e2c3" (UID: "a60218b9-937a-431f-8849-babc2ca5e2c3"). InnerVolumeSpecName "kube-api-access-lbchw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:35.999996 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a60218b9-937a-431f-8849-babc2ca5e2c3" (UID: "a60218b9-937a-431f-8849-babc2ca5e2c3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.001725 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a60218b9-937a-431f-8849-babc2ca5e2c3" (UID: "a60218b9-937a-431f-8849-babc2ca5e2c3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.014829 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a60218b9-937a-431f-8849-babc2ca5e2c3" (UID: "a60218b9-937a-431f-8849-babc2ca5e2c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.021442 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-config" (OuterVolumeSpecName: "config") pod "a60218b9-937a-431f-8849-babc2ca5e2c3" (UID: "a60218b9-937a-431f-8849-babc2ca5e2c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.048299 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.048330 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.048340 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.048348 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60218b9-937a-431f-8849-babc2ca5e2c3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.048357 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbchw\" (UniqueName: \"kubernetes.io/projected/a60218b9-937a-431f-8849-babc2ca5e2c3-kube-api-access-lbchw\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.591049 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt" event={"ID":"a60218b9-937a-431f-8849-babc2ca5e2c3","Type":"ContainerDied","Data":"71859ded9127eee07cfbcd311cc59d12a089afc98038ba5863d716cff9dc6a1e"} Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.591159 5034 scope.go:117] "RemoveContainer" containerID="1a0310e236a951b75eb049f6f13175d8ff0fc02d9fe78d90ebb144438922e2ff" Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.591312 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f46ff4fd9-ksdjt"
Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.644062 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f46ff4fd9-ksdjt"]
Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.655879 5034 scope.go:117] "RemoveContainer" containerID="9a5a81c2bb56856daeae935d22f02489c648e4c026af84a4c34531a94ad96eea"
Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.666666 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f46ff4fd9-ksdjt"]
Jan 05 23:23:36 crc kubenswrapper[5034]: I0105 23:23:36.933758 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6748c566c8-2p2dz"
Jan 05 23:23:37 crc kubenswrapper[5034]: I0105 23:23:37.012104 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6748c566c8-2p2dz"
Jan 05 23:23:37 crc kubenswrapper[5034]: I0105 23:23:37.850714 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60218b9-937a-431f-8849-babc2ca5e2c3" path="/var/lib/kubelet/pods/a60218b9-937a-431f-8849-babc2ca5e2c3/volumes"
Jan 05 23:23:39 crc kubenswrapper[5034]: I0105 23:23:39.364643 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76d8878c56-6h7md"
Jan 05 23:23:39 crc kubenswrapper[5034]: I0105 23:23:39.376487 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76d8878c56-6h7md"
Jan 05 23:23:39 crc kubenswrapper[5034]: I0105 23:23:39.467560 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6748c566c8-2p2dz"]
Jan 05 23:23:39 crc kubenswrapper[5034]: I0105 23:23:39.467811 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6748c566c8-2p2dz" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api-log" containerID="cri-o://716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b" gracePeriod=30
Jan 05 23:23:39 crc kubenswrapper[5034]: I0105 23:23:39.468297 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6748c566c8-2p2dz" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api" containerID="cri-o://445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02" gracePeriod=30
Jan 05 23:23:39 crc kubenswrapper[5034]: I0105 23:23:39.636752 5034 generic.go:334] "Generic (PLEG): container finished" podID="47d3862b-0b81-43bd-aa08-e26112d99752" containerID="716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b" exitCode=143
Jan 05 23:23:39 crc kubenswrapper[5034]: I0105 23:23:39.638092 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6748c566c8-2p2dz" event={"ID":"47d3862b-0b81-43bd-aa08-e26112d99752","Type":"ContainerDied","Data":"716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b"}
Jan 05 23:23:42 crc kubenswrapper[5034]: I0105 23:23:42.618218 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6748c566c8-2p2dz" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.33:9311/healthcheck\": read tcp 10.217.0.2:41308->10.217.1.33:9311: read: connection reset by peer"
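The "Probe failed" entries show the readiness probe racing the container's shutdown: the HTTP GET to the healthcheck endpoint is reset mid-read. A minimal approximation of such an HTTP readiness check; the success window of [200,400) matches the documented HTTP-probe convention, while the timeout value here is an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce approximates an HTTP readiness check: a GET whose status in
// [200,400) counts as success; any transport error (such as the
// "connection reset by peer" above, from a container mid-shutdown) fails.
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second} // timeout value is an assumption
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unexpected status %d", resp.StatusCode)
}

func main() {
	// Endpoint copied from the probe output above; only reachable on-cluster.
	if err := probeOnce("http://10.217.1.33:9311/healthcheck"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}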
pod="openstack/barbican-api-6748c566c8-2p2dz" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.33:9311/healthcheck\": read tcp 10.217.0.2:41318->10.217.1.33:9311: read: connection reset by peer" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.020566 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6748c566c8-2p2dz" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.212121 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4knqb\" (UniqueName: \"kubernetes.io/projected/47d3862b-0b81-43bd-aa08-e26112d99752-kube-api-access-4knqb\") pod \"47d3862b-0b81-43bd-aa08-e26112d99752\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.212225 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-combined-ca-bundle\") pod \"47d3862b-0b81-43bd-aa08-e26112d99752\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.212267 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data\") pod \"47d3862b-0b81-43bd-aa08-e26112d99752\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.212321 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data-custom\") pod \"47d3862b-0b81-43bd-aa08-e26112d99752\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.212362 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3862b-0b81-43bd-aa08-e26112d99752-logs\") pod \"47d3862b-0b81-43bd-aa08-e26112d99752\" (UID: \"47d3862b-0b81-43bd-aa08-e26112d99752\") " Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.213384 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d3862b-0b81-43bd-aa08-e26112d99752-logs" (OuterVolumeSpecName: "logs") pod "47d3862b-0b81-43bd-aa08-e26112d99752" (UID: "47d3862b-0b81-43bd-aa08-e26112d99752"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.218286 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d3862b-0b81-43bd-aa08-e26112d99752-kube-api-access-4knqb" (OuterVolumeSpecName: "kube-api-access-4knqb") pod "47d3862b-0b81-43bd-aa08-e26112d99752" (UID: "47d3862b-0b81-43bd-aa08-e26112d99752"). InnerVolumeSpecName "kube-api-access-4knqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.218553 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "47d3862b-0b81-43bd-aa08-e26112d99752" (UID: "47d3862b-0b81-43bd-aa08-e26112d99752"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.240748 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47d3862b-0b81-43bd-aa08-e26112d99752" (UID: "47d3862b-0b81-43bd-aa08-e26112d99752"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.307357 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data" (OuterVolumeSpecName: "config-data") pod "47d3862b-0b81-43bd-aa08-e26112d99752" (UID: "47d3862b-0b81-43bd-aa08-e26112d99752"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.328402 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4knqb\" (UniqueName: \"kubernetes.io/projected/47d3862b-0b81-43bd-aa08-e26112d99752-kube-api-access-4knqb\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.328450 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.328463 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.328472 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47d3862b-0b81-43bd-aa08-e26112d99752-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.328482 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3862b-0b81-43bd-aa08-e26112d99752-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.678958 5034 generic.go:334] "Generic (PLEG): container finished" podID="47d3862b-0b81-43bd-aa08-e26112d99752" containerID="445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02" exitCode=0 Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.679062 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6748c566c8-2p2dz" event={"ID":"47d3862b-0b81-43bd-aa08-e26112d99752","Type":"ContainerDied","Data":"445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02"} Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.679403 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6748c566c8-2p2dz" event={"ID":"47d3862b-0b81-43bd-aa08-e26112d99752","Type":"ContainerDied","Data":"076897b1b96cce9e40a51e366773166c10552f0ad2f5e3b47248e35ea6b6735c"} Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.679478 5034 scope.go:117] "RemoveContainer" containerID="445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02" Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.679798 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6748c566c8-2p2dz"
Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.746503 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6748c566c8-2p2dz"]
Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.748474 5034 scope.go:117] "RemoveContainer" containerID="716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b"
Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.752938 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6748c566c8-2p2dz"]
Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.774395 5034 scope.go:117] "RemoveContainer" containerID="445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02"
Jan 05 23:23:43 crc kubenswrapper[5034]: E0105 23:23:43.774844 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02\": container with ID starting with 445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02 not found: ID does not exist" containerID="445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02"
Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.774895 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02"} err="failed to get container status \"445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02\": rpc error: code = NotFound desc = could not find container \"445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02\": container with ID starting with 445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02 not found: ID does not exist"
Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.774927 5034 scope.go:117] "RemoveContainer" containerID="716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b"
Jan 05 23:23:43 crc kubenswrapper[5034]: E0105 23:23:43.775354 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b\": container with ID starting with 716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b not found: ID does not exist" containerID="716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b"
Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.775380 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b"} err="failed to get container status \"716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b\": rpc error: code = NotFound desc = could not find container \"716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b\": container with ID starting with 716718a03fd46236b2e4be4339dea6faa7fb499196c6d43bf0c5788fb3e0765b not found: ID does not exist"
Jan 05 23:23:43 crc kubenswrapper[5034]: I0105 23:23:43.850708 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" path="/var/lib/kubelet/pods/47d3862b-0b81-43bd-aa08-e26112d99752/volumes"
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.820836 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-w28jt"]
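The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines above show cleanup tolerating a container that is already gone: the CRI call fails with gRPC code NotFound, which gets logged and then ignored. A sketch of that tolerance check; the helper name is mine, not the kubelet's:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isBenignNotFound reports whether a CRI error just means the container is
// already removed, which a cleanup path can safely skip over.
func isBenignNotFound(err error) bool {
	if err == nil {
		return false
	}
	if s, ok := status.FromError(err); ok {
		return s.Code() == codes.NotFound
	}
	return false
}

func main() {
	gone := status.Error(codes.NotFound,
		`could not find container "445dcf5b86598d0465d98cd6e57acea2855c817552ad8b326e55b412088eed02"`)
	fmt.Println(isBenignNotFound(gone))                      // true: treat as already deleted
	fmt.Println(isBenignNotFound(errors.New("rpc timeout"))) // false: a real failure
}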
"RemoveStaleState: removing container" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api-log" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.821727 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api-log" Jan 05 23:23:46 crc kubenswrapper[5034]: E0105 23:23:46.821777 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.821789 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api" Jan 05 23:23:46 crc kubenswrapper[5034]: E0105 23:23:46.821808 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60218b9-937a-431f-8849-babc2ca5e2c3" containerName="init" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.821817 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60218b9-937a-431f-8849-babc2ca5e2c3" containerName="init" Jan 05 23:23:46 crc kubenswrapper[5034]: E0105 23:23:46.821832 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60218b9-937a-431f-8849-babc2ca5e2c3" containerName="dnsmasq-dns" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.821839 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60218b9-937a-431f-8849-babc2ca5e2c3" containerName="dnsmasq-dns" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.822022 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api-log" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.822037 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60218b9-937a-431f-8849-babc2ca5e2c3" containerName="dnsmasq-dns" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.822056 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d3862b-0b81-43bd-aa08-e26112d99752" containerName="barbican-api" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.822834 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w28jt" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.830558 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w28jt"] Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.894504 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm29z\" (UniqueName: \"kubernetes.io/projected/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-kube-api-access-qm29z\") pod \"neutron-db-create-w28jt\" (UID: \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\") " pod="openstack/neutron-db-create-w28jt" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.894708 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-operator-scripts\") pod \"neutron-db-create-w28jt\" (UID: \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\") " pod="openstack/neutron-db-create-w28jt" Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.919098 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-909f-account-create-update-vzhr2"] Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.921036 5034 util.go:30] "No sandbox for pod can be found. 
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.921036 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-909f-account-create-update-vzhr2"
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.923919 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.935186 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-909f-account-create-update-vzhr2"]
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.996752 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzx5r\" (UniqueName: \"kubernetes.io/projected/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-kube-api-access-gzx5r\") pod \"neutron-909f-account-create-update-vzhr2\" (UID: \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\") " pod="openstack/neutron-909f-account-create-update-vzhr2"
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.996852 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm29z\" (UniqueName: \"kubernetes.io/projected/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-kube-api-access-qm29z\") pod \"neutron-db-create-w28jt\" (UID: \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\") " pod="openstack/neutron-db-create-w28jt"
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.997316 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-operator-scripts\") pod \"neutron-db-create-w28jt\" (UID: \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\") " pod="openstack/neutron-db-create-w28jt"
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.997346 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-operator-scripts\") pod \"neutron-909f-account-create-update-vzhr2\" (UID: \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\") " pod="openstack/neutron-909f-account-create-update-vzhr2"
Jan 05 23:23:46 crc kubenswrapper[5034]: I0105 23:23:46.998201 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-operator-scripts\") pod \"neutron-db-create-w28jt\" (UID: \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\") " pod="openstack/neutron-db-create-w28jt"
Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.020667 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm29z\" (UniqueName: \"kubernetes.io/projected/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-kube-api-access-qm29z\") pod \"neutron-db-create-w28jt\" (UID: \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\") " pod="openstack/neutron-db-create-w28jt"
Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.100119 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzx5r\" (UniqueName: \"kubernetes.io/projected/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-kube-api-access-gzx5r\") pod \"neutron-909f-account-create-update-vzhr2\" (UID: \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\") " pod="openstack/neutron-909f-account-create-update-vzhr2"
Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.100591 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-operator-scripts\") pod
\"neutron-909f-account-create-update-vzhr2\" (UID: \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\") " pod="openstack/neutron-909f-account-create-update-vzhr2" Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.101366 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-operator-scripts\") pod \"neutron-909f-account-create-update-vzhr2\" (UID: \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\") " pod="openstack/neutron-909f-account-create-update-vzhr2" Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.123126 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzx5r\" (UniqueName: \"kubernetes.io/projected/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-kube-api-access-gzx5r\") pod \"neutron-909f-account-create-update-vzhr2\" (UID: \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\") " pod="openstack/neutron-909f-account-create-update-vzhr2" Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.157426 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w28jt" Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.245990 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-909f-account-create-update-vzhr2" Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.647541 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w28jt"] Jan 05 23:23:47 crc kubenswrapper[5034]: W0105 23:23:47.651907 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod440a3282_7a5d_4fcd_af1d_89d6129d0cdb.slice/crio-6d5d59a3f402f16603014f161ebfde0b533fed0ce0efa00ab3002e44556c9ae3 WatchSource:0}: Error finding container 6d5d59a3f402f16603014f161ebfde0b533fed0ce0efa00ab3002e44556c9ae3: Status 404 returned error can't find the container with id 6d5d59a3f402f16603014f161ebfde0b533fed0ce0efa00ab3002e44556c9ae3 Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.712917 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w28jt" event={"ID":"440a3282-7a5d-4fcd-af1d-89d6129d0cdb","Type":"ContainerStarted","Data":"6d5d59a3f402f16603014f161ebfde0b533fed0ce0efa00ab3002e44556c9ae3"} Jan 05 23:23:47 crc kubenswrapper[5034]: W0105 23:23:47.816961 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76ed38be_ed9b_4d7d_91d1_c5435d7621eb.slice/crio-6c00883061dcd611fe1c2b56c3577d3057091ccf166bd32b7f790ab5a381e995 WatchSource:0}: Error finding container 6c00883061dcd611fe1c2b56c3577d3057091ccf166bd32b7f790ab5a381e995: Status 404 returned error can't find the container with id 6c00883061dcd611fe1c2b56c3577d3057091ccf166bd32b7f790ab5a381e995 Jan 05 23:23:47 crc kubenswrapper[5034]: I0105 23:23:47.817926 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-909f-account-create-update-vzhr2"] Jan 05 23:23:48 crc kubenswrapper[5034]: I0105 23:23:48.729829 5034 generic.go:334] "Generic (PLEG): container finished" podID="76ed38be-ed9b-4d7d-91d1-c5435d7621eb" containerID="77bbee72d22f20d5c6624da7132b7d2b3af3d3fbd5a7de19dd7f29f910ee3ef1" exitCode=0 Jan 05 23:23:48 crc kubenswrapper[5034]: I0105 23:23:48.730359 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-909f-account-create-update-vzhr2" 
event={"ID":"76ed38be-ed9b-4d7d-91d1-c5435d7621eb","Type":"ContainerDied","Data":"77bbee72d22f20d5c6624da7132b7d2b3af3d3fbd5a7de19dd7f29f910ee3ef1"} Jan 05 23:23:48 crc kubenswrapper[5034]: I0105 23:23:48.730409 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-909f-account-create-update-vzhr2" event={"ID":"76ed38be-ed9b-4d7d-91d1-c5435d7621eb","Type":"ContainerStarted","Data":"6c00883061dcd611fe1c2b56c3577d3057091ccf166bd32b7f790ab5a381e995"} Jan 05 23:23:48 crc kubenswrapper[5034]: I0105 23:23:48.734433 5034 generic.go:334] "Generic (PLEG): container finished" podID="440a3282-7a5d-4fcd-af1d-89d6129d0cdb" containerID="a9d9bf337abb9c9fe4c32b360dec69ca24df71a9ab68a93143913aefadcab6aa" exitCode=0 Jan 05 23:23:48 crc kubenswrapper[5034]: I0105 23:23:48.734501 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w28jt" event={"ID":"440a3282-7a5d-4fcd-af1d-89d6129d0cdb","Type":"ContainerDied","Data":"a9d9bf337abb9c9fe4c32b360dec69ca24df71a9ab68a93143913aefadcab6aa"} Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.101300 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w28jt" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.107309 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-909f-account-create-update-vzhr2" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.176532 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-operator-scripts\") pod \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\" (UID: \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\") " Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.176610 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzx5r\" (UniqueName: \"kubernetes.io/projected/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-kube-api-access-gzx5r\") pod \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\" (UID: \"76ed38be-ed9b-4d7d-91d1-c5435d7621eb\") " Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.176675 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm29z\" (UniqueName: \"kubernetes.io/projected/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-kube-api-access-qm29z\") pod \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\" (UID: \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\") " Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.176901 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-operator-scripts\") pod \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\" (UID: \"440a3282-7a5d-4fcd-af1d-89d6129d0cdb\") " Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.177603 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "440a3282-7a5d-4fcd-af1d-89d6129d0cdb" (UID: "440a3282-7a5d-4fcd-af1d-89d6129d0cdb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.177829 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76ed38be-ed9b-4d7d-91d1-c5435d7621eb" (UID: "76ed38be-ed9b-4d7d-91d1-c5435d7621eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.178469 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.178492 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.182257 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-kube-api-access-gzx5r" (OuterVolumeSpecName: "kube-api-access-gzx5r") pod "76ed38be-ed9b-4d7d-91d1-c5435d7621eb" (UID: "76ed38be-ed9b-4d7d-91d1-c5435d7621eb"). InnerVolumeSpecName "kube-api-access-gzx5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.182319 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-kube-api-access-qm29z" (OuterVolumeSpecName: "kube-api-access-qm29z") pod "440a3282-7a5d-4fcd-af1d-89d6129d0cdb" (UID: "440a3282-7a5d-4fcd-af1d-89d6129d0cdb"). InnerVolumeSpecName "kube-api-access-qm29z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.280684 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzx5r\" (UniqueName: \"kubernetes.io/projected/76ed38be-ed9b-4d7d-91d1-c5435d7621eb-kube-api-access-gzx5r\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.280717 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm29z\" (UniqueName: \"kubernetes.io/projected/440a3282-7a5d-4fcd-af1d-89d6129d0cdb-kube-api-access-qm29z\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.751423 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-909f-account-create-update-vzhr2" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.755429 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-909f-account-create-update-vzhr2" event={"ID":"76ed38be-ed9b-4d7d-91d1-c5435d7621eb","Type":"ContainerDied","Data":"6c00883061dcd611fe1c2b56c3577d3057091ccf166bd32b7f790ab5a381e995"} Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.755521 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c00883061dcd611fe1c2b56c3577d3057091ccf166bd32b7f790ab5a381e995" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.760056 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w28jt" event={"ID":"440a3282-7a5d-4fcd-af1d-89d6129d0cdb","Type":"ContainerDied","Data":"6d5d59a3f402f16603014f161ebfde0b533fed0ce0efa00ab3002e44556c9ae3"} Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.760140 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5d59a3f402f16603014f161ebfde0b533fed0ce0efa00ab3002e44556c9ae3" Jan 05 23:23:50 crc kubenswrapper[5034]: I0105 23:23:50.761390 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w28jt" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.161352 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hth4l"] Jan 05 23:23:52 crc kubenswrapper[5034]: E0105 23:23:52.162151 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ed38be-ed9b-4d7d-91d1-c5435d7621eb" containerName="mariadb-account-create-update" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.162168 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ed38be-ed9b-4d7d-91d1-c5435d7621eb" containerName="mariadb-account-create-update" Jan 05 23:23:52 crc kubenswrapper[5034]: E0105 23:23:52.162187 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440a3282-7a5d-4fcd-af1d-89d6129d0cdb" containerName="mariadb-database-create" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.162193 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="440a3282-7a5d-4fcd-af1d-89d6129d0cdb" containerName="mariadb-database-create" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.162372 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="440a3282-7a5d-4fcd-af1d-89d6129d0cdb" containerName="mariadb-database-create" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.162396 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ed38be-ed9b-4d7d-91d1-c5435d7621eb" containerName="mariadb-account-create-update" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.163009 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.166859 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.167324 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-297fc" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.167572 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.174071 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hth4l"] Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.217384 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-combined-ca-bundle\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.217574 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-config\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.218188 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qccn\" (UniqueName: \"kubernetes.io/projected/593f1cee-9783-4841-a32b-1335a0c115fd-kube-api-access-6qccn\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.320594 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qccn\" (UniqueName: \"kubernetes.io/projected/593f1cee-9783-4841-a32b-1335a0c115fd-kube-api-access-6qccn\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.320672 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-combined-ca-bundle\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.320709 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-config\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.329049 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-config\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.330727 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-combined-ca-bundle\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.339889 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qccn\" (UniqueName: \"kubernetes.io/projected/593f1cee-9783-4841-a32b-1335a0c115fd-kube-api-access-6qccn\") pod \"neutron-db-sync-hth4l\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.486387 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:52 crc kubenswrapper[5034]: I0105 23:23:52.945550 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hth4l"] Jan 05 23:23:53 crc kubenswrapper[5034]: I0105 23:23:53.788830 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hth4l" event={"ID":"593f1cee-9783-4841-a32b-1335a0c115fd","Type":"ContainerStarted","Data":"28023b5ae9f5ff89b6d4e0e64195584e0174132239ed53308e4d703ab907a48f"} Jan 05 23:23:53 crc kubenswrapper[5034]: I0105 23:23:53.789207 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hth4l" event={"ID":"593f1cee-9783-4841-a32b-1335a0c115fd","Type":"ContainerStarted","Data":"f83fdf1622ced543a701336b1054ac37671941e11ed7a6cf1bafbcb368fc4589"} Jan 05 23:23:53 crc kubenswrapper[5034]: I0105 23:23:53.811363 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hth4l" podStartSLOduration=1.811338012 podStartE2EDuration="1.811338012s" podCreationTimestamp="2026-01-05 23:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:23:53.802766149 +0000 UTC m=+5526.174765588" watchObservedRunningTime="2026-01-05 23:23:53.811338012 +0000 UTC m=+5526.183337451" Jan 05 23:23:57 crc kubenswrapper[5034]: I0105 23:23:57.836504 5034 generic.go:334] "Generic (PLEG): container finished" podID="593f1cee-9783-4841-a32b-1335a0c115fd" containerID="28023b5ae9f5ff89b6d4e0e64195584e0174132239ed53308e4d703ab907a48f" exitCode=0 Jan 05 23:23:57 crc kubenswrapper[5034]: I0105 23:23:57.836604 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hth4l" event={"ID":"593f1cee-9783-4841-a32b-1335a0c115fd","Type":"ContainerDied","Data":"28023b5ae9f5ff89b6d4e0e64195584e0174132239ed53308e4d703ab907a48f"} Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.179639 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hth4l" Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.273551 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qccn\" (UniqueName: \"kubernetes.io/projected/593f1cee-9783-4841-a32b-1335a0c115fd-kube-api-access-6qccn\") pod \"593f1cee-9783-4841-a32b-1335a0c115fd\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.273707 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-config\") pod \"593f1cee-9783-4841-a32b-1335a0c115fd\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.273842 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-combined-ca-bundle\") pod \"593f1cee-9783-4841-a32b-1335a0c115fd\" (UID: \"593f1cee-9783-4841-a32b-1335a0c115fd\") " Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.288677 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593f1cee-9783-4841-a32b-1335a0c115fd-kube-api-access-6qccn" (OuterVolumeSpecName: "kube-api-access-6qccn") pod "593f1cee-9783-4841-a32b-1335a0c115fd" (UID: "593f1cee-9783-4841-a32b-1335a0c115fd"). InnerVolumeSpecName "kube-api-access-6qccn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.300191 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "593f1cee-9783-4841-a32b-1335a0c115fd" (UID: "593f1cee-9783-4841-a32b-1335a0c115fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.322618 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-config" (OuterVolumeSpecName: "config") pod "593f1cee-9783-4841-a32b-1335a0c115fd" (UID: "593f1cee-9783-4841-a32b-1335a0c115fd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.380588 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.380628 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f1cee-9783-4841-a32b-1335a0c115fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.380646 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qccn\" (UniqueName: \"kubernetes.io/projected/593f1cee-9783-4841-a32b-1335a0c115fd-kube-api-access-6qccn\") on node \"crc\" DevicePath \"\"" Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.859392 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hth4l" event={"ID":"593f1cee-9783-4841-a32b-1335a0c115fd","Type":"ContainerDied","Data":"f83fdf1622ced543a701336b1054ac37671941e11ed7a6cf1bafbcb368fc4589"} Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.859739 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83fdf1622ced543a701336b1054ac37671941e11ed7a6cf1bafbcb368fc4589" Jan 05 23:23:59 crc kubenswrapper[5034]: I0105 23:23:59.859466 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hth4l" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.054416 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f894ffd67-2bd46"] Jan 05 23:24:00 crc kubenswrapper[5034]: E0105 23:24:00.054798 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593f1cee-9783-4841-a32b-1335a0c115fd" containerName="neutron-db-sync" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.054817 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="593f1cee-9783-4841-a32b-1335a0c115fd" containerName="neutron-db-sync" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.054965 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="593f1cee-9783-4841-a32b-1335a0c115fd" containerName="neutron-db-sync" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.055946 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.096435 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-dns-svc\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.096508 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npt9t\" (UniqueName: \"kubernetes.io/projected/55f19b5f-9663-427b-a68c-7ff5888fe36e-kube-api-access-npt9t\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.097102 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.097199 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.097487 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-config\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.163986 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f894ffd67-2bd46"] Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.202288 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-config\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.202370 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-dns-svc\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.202401 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npt9t\" (UniqueName: \"kubernetes.io/projected/55f19b5f-9663-427b-a68c-7ff5888fe36e-kube-api-access-npt9t\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.202495 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.202531 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.203589 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.203600 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.204188 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-dns-svc\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.204610 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-config\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.223003 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npt9t\" (UniqueName: \"kubernetes.io/projected/55f19b5f-9663-427b-a68c-7ff5888fe36e-kube-api-access-npt9t\") pod \"dnsmasq-dns-6f894ffd67-2bd46\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.317901 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8fff78d-x568c"] Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.320240 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.323108 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.323679 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.324503 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.324707 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-297fc" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.342436 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8fff78d-x568c"] Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.375705 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.405511 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-httpd-config\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.405614 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-ovndb-tls-certs\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.405646 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-config\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.405688 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dxnk\" (UniqueName: \"kubernetes.io/projected/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-kube-api-access-4dxnk\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.405715 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-combined-ca-bundle\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.510173 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dxnk\" (UniqueName: \"kubernetes.io/projected/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-kube-api-access-4dxnk\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.510234 5034 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-combined-ca-bundle\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.510282 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-httpd-config\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.510402 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-ovndb-tls-certs\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.510434 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-config\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.516016 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-config\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.516506 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-httpd-config\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.521975 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-ovndb-tls-certs\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.536421 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-combined-ca-bundle\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.540213 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dxnk\" (UniqueName: \"kubernetes.io/projected/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-kube-api-access-4dxnk\") pod \"neutron-8fff78d-x568c\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:00 crc kubenswrapper[5034]: I0105 23:24:00.647361 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.051330 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f894ffd67-2bd46"] Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.290333 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8fff78d-x568c"] Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.896925 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fff78d-x568c" event={"ID":"fbd527dc-29b7-4120-a2f4-ff99c4ca660f","Type":"ContainerStarted","Data":"0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7"} Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.898192 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fff78d-x568c" event={"ID":"fbd527dc-29b7-4120-a2f4-ff99c4ca660f","Type":"ContainerStarted","Data":"18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874"} Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.898207 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fff78d-x568c" event={"ID":"fbd527dc-29b7-4120-a2f4-ff99c4ca660f","Type":"ContainerStarted","Data":"810692ff152185512f973155610922cfae5e8a1f2404819d0292b2832b859826"} Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.898271 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.900203 5034 generic.go:334] "Generic (PLEG): container finished" podID="55f19b5f-9663-427b-a68c-7ff5888fe36e" containerID="8fee1ac5a0ce47609eb57f221cd00e3c3257318a377d3818dbd57489b6cdd372" exitCode=0 Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.900232 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" event={"ID":"55f19b5f-9663-427b-a68c-7ff5888fe36e","Type":"ContainerDied","Data":"8fee1ac5a0ce47609eb57f221cd00e3c3257318a377d3818dbd57489b6cdd372"} Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.900248 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" event={"ID":"55f19b5f-9663-427b-a68c-7ff5888fe36e","Type":"ContainerStarted","Data":"2fe2d54da18002dd293488cb0ac8e4e40f65629096013df5b35bfd52992eff70"} Jan 05 23:24:01 crc kubenswrapper[5034]: I0105 23:24:01.927929 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8fff78d-x568c" podStartSLOduration=1.927909286 podStartE2EDuration="1.927909286s" podCreationTimestamp="2026-01-05 23:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:24:01.915337119 +0000 UTC m=+5534.287336558" watchObservedRunningTime="2026-01-05 23:24:01.927909286 +0000 UTC m=+5534.299908725" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.701889 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5797d7d97c-twwkv"] Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.703592 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.705763 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.706859 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.721235 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5797d7d97c-twwkv"] Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.772846 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-combined-ca-bundle\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.772952 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-public-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.773025 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-config\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.773438 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-ovndb-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.773518 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-httpd-config\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.773544 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-internal-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.773586 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmq5b\" (UniqueName: \"kubernetes.io/projected/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-kube-api-access-bmq5b\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.875800 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-public-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.875867 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-config\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.875956 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-ovndb-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.875979 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-httpd-config\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.875997 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-internal-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.876022 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmq5b\" (UniqueName: \"kubernetes.io/projected/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-kube-api-access-bmq5b\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.876057 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-combined-ca-bundle\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.885358 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-httpd-config\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.886213 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-ovndb-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.890491 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-combined-ca-bundle\") pod \"neutron-5797d7d97c-twwkv\" (UID: 
\"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.896796 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-public-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.917876 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-internal-tls-certs\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.918230 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-config\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.923128 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmq5b\" (UniqueName: \"kubernetes.io/projected/c26b529d-4e2d-489e-a15a-e9344e5cb5cd-kube-api-access-bmq5b\") pod \"neutron-5797d7d97c-twwkv\" (UID: \"c26b529d-4e2d-489e-a15a-e9344e5cb5cd\") " pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.935631 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" event={"ID":"55f19b5f-9663-427b-a68c-7ff5888fe36e","Type":"ContainerStarted","Data":"41964c6590b6c53ac4333204c69454af4dc6322f8da092e8cfe20c1d83997e5c"} Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.935701 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:02 crc kubenswrapper[5034]: I0105 23:24:02.970027 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" podStartSLOduration=2.969997089 podStartE2EDuration="2.969997089s" podCreationTimestamp="2026-01-05 23:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:24:02.963232727 +0000 UTC m=+5535.335232166" watchObservedRunningTime="2026-01-05 23:24:02.969997089 +0000 UTC m=+5535.341996528" Jan 05 23:24:03 crc kubenswrapper[5034]: I0105 23:24:03.037478 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:03 crc kubenswrapper[5034]: I0105 23:24:03.671763 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5797d7d97c-twwkv"] Jan 05 23:24:03 crc kubenswrapper[5034]: W0105 23:24:03.674427 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26b529d_4e2d_489e_a15a_e9344e5cb5cd.slice/crio-7190821c69b51a5f9230481ed14de6d8da584646318e0727476fadb4fca5a234 WatchSource:0}: Error finding container 7190821c69b51a5f9230481ed14de6d8da584646318e0727476fadb4fca5a234: Status 404 returned error can't find the container with id 7190821c69b51a5f9230481ed14de6d8da584646318e0727476fadb4fca5a234 Jan 05 23:24:03 crc kubenswrapper[5034]: I0105 23:24:03.945560 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5797d7d97c-twwkv" event={"ID":"c26b529d-4e2d-489e-a15a-e9344e5cb5cd","Type":"ContainerStarted","Data":"6db02fece7d2289d4245c7afb8e8c19de9559a637dbc736e2c83c6a5c0d9fd9d"} Jan 05 23:24:03 crc kubenswrapper[5034]: I0105 23:24:03.945959 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5797d7d97c-twwkv" event={"ID":"c26b529d-4e2d-489e-a15a-e9344e5cb5cd","Type":"ContainerStarted","Data":"7190821c69b51a5f9230481ed14de6d8da584646318e0727476fadb4fca5a234"} Jan 05 23:24:04 crc kubenswrapper[5034]: I0105 23:24:04.955524 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5797d7d97c-twwkv" event={"ID":"c26b529d-4e2d-489e-a15a-e9344e5cb5cd","Type":"ContainerStarted","Data":"c38aa3692438374e2eb1ce9b6e12943b3117ff06111eff5d38b9aee42532b9ed"} Jan 05 23:24:04 crc kubenswrapper[5034]: I0105 23:24:04.983273 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5797d7d97c-twwkv" podStartSLOduration=2.983246885 podStartE2EDuration="2.983246885s" podCreationTimestamp="2026-01-05 23:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:24:04.977552483 +0000 UTC m=+5537.349551922" watchObservedRunningTime="2026-01-05 23:24:04.983246885 +0000 UTC m=+5537.355246324" Jan 05 23:24:05 crc kubenswrapper[5034]: I0105 23:24:05.962984 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:10 crc kubenswrapper[5034]: I0105 23:24:10.377309 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:10 crc kubenswrapper[5034]: I0105 23:24:10.444965 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8fd969f7-5b8sv"] Jan 05 23:24:10 crc kubenswrapper[5034]: I0105 23:24:10.445486 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" podUID="12fcda91-6451-4fd1-a777-6d02f3c82aee" containerName="dnsmasq-dns" containerID="cri-o://c982a86da6dec8cb9e305b3b8c45177f1297c47ff259f45934250ba5e25e51dd" gracePeriod=10 Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.020102 5034 generic.go:334] "Generic (PLEG): container finished" podID="12fcda91-6451-4fd1-a777-6d02f3c82aee" containerID="c982a86da6dec8cb9e305b3b8c45177f1297c47ff259f45934250ba5e25e51dd" exitCode=0 Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.020201 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" event={"ID":"12fcda91-6451-4fd1-a777-6d02f3c82aee","Type":"ContainerDied","Data":"c982a86da6dec8cb9e305b3b8c45177f1297c47ff259f45934250ba5e25e51dd"} Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.127450 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.266251 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-nb\") pod \"12fcda91-6451-4fd1-a777-6d02f3c82aee\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.266358 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-dns-svc\") pod \"12fcda91-6451-4fd1-a777-6d02f3c82aee\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.266428 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgcqr\" (UniqueName: \"kubernetes.io/projected/12fcda91-6451-4fd1-a777-6d02f3c82aee-kube-api-access-tgcqr\") pod \"12fcda91-6451-4fd1-a777-6d02f3c82aee\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.266449 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-config\") pod \"12fcda91-6451-4fd1-a777-6d02f3c82aee\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.266503 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-sb\") pod \"12fcda91-6451-4fd1-a777-6d02f3c82aee\" (UID: \"12fcda91-6451-4fd1-a777-6d02f3c82aee\") " Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.277694 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12fcda91-6451-4fd1-a777-6d02f3c82aee-kube-api-access-tgcqr" (OuterVolumeSpecName: "kube-api-access-tgcqr") pod "12fcda91-6451-4fd1-a777-6d02f3c82aee" (UID: "12fcda91-6451-4fd1-a777-6d02f3c82aee"). InnerVolumeSpecName "kube-api-access-tgcqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.316963 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-config" (OuterVolumeSpecName: "config") pod "12fcda91-6451-4fd1-a777-6d02f3c82aee" (UID: "12fcda91-6451-4fd1-a777-6d02f3c82aee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.319861 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "12fcda91-6451-4fd1-a777-6d02f3c82aee" (UID: "12fcda91-6451-4fd1-a777-6d02f3c82aee"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.320070 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12fcda91-6451-4fd1-a777-6d02f3c82aee" (UID: "12fcda91-6451-4fd1-a777-6d02f3c82aee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.329614 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "12fcda91-6451-4fd1-a777-6d02f3c82aee" (UID: "12fcda91-6451-4fd1-a777-6d02f3c82aee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.368173 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.368204 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.368214 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgcqr\" (UniqueName: \"kubernetes.io/projected/12fcda91-6451-4fd1-a777-6d02f3c82aee-kube-api-access-tgcqr\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.368224 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:11 crc kubenswrapper[5034]: I0105 23:24:11.368234 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12fcda91-6451-4fd1-a777-6d02f3c82aee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:12 crc kubenswrapper[5034]: I0105 23:24:12.030584 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" event={"ID":"12fcda91-6451-4fd1-a777-6d02f3c82aee","Type":"ContainerDied","Data":"4816d32655c9b071d8b127ef5953b19d1edecccc9796f0abd36472bef6b04a00"} Jan 05 23:24:12 crc kubenswrapper[5034]: I0105 23:24:12.030643 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8fd969f7-5b8sv" Jan 05 23:24:12 crc kubenswrapper[5034]: I0105 23:24:12.030663 5034 scope.go:117] "RemoveContainer" containerID="c982a86da6dec8cb9e305b3b8c45177f1297c47ff259f45934250ba5e25e51dd" Jan 05 23:24:12 crc kubenswrapper[5034]: I0105 23:24:12.055014 5034 scope.go:117] "RemoveContainer" containerID="0604a3083f6499434cb8d51d50f8a1052fc51afa84ed1734662c980e94b3e4d0" Jan 05 23:24:12 crc kubenswrapper[5034]: I0105 23:24:12.058767 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8fd969f7-5b8sv"] Jan 05 23:24:12 crc kubenswrapper[5034]: I0105 23:24:12.066592 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d8fd969f7-5b8sv"] Jan 05 23:24:13 crc kubenswrapper[5034]: I0105 23:24:13.849243 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12fcda91-6451-4fd1-a777-6d02f3c82aee" path="/var/lib/kubelet/pods/12fcda91-6451-4fd1-a777-6d02f3c82aee/volumes" Jan 05 23:24:23 crc kubenswrapper[5034]: I0105 23:24:23.051142 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qhmmf"] Jan 05 23:24:23 crc kubenswrapper[5034]: I0105 23:24:23.057547 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qhmmf"] Jan 05 23:24:23 crc kubenswrapper[5034]: I0105 23:24:23.848739 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c796c8e-6659-486b-8393-4d934e907b28" path="/var/lib/kubelet/pods/4c796c8e-6659-486b-8393-4d934e907b28/volumes" Jan 05 23:24:30 crc kubenswrapper[5034]: I0105 23:24:30.662050 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:33 crc kubenswrapper[5034]: I0105 23:24:33.049898 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5797d7d97c-twwkv" Jan 05 23:24:33 crc kubenswrapper[5034]: I0105 23:24:33.117273 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8fff78d-x568c"] Jan 05 23:24:33 crc kubenswrapper[5034]: I0105 23:24:33.117836 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8fff78d-x568c" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerName="neutron-api" containerID="cri-o://18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874" gracePeriod=30 Jan 05 23:24:33 crc kubenswrapper[5034]: I0105 23:24:33.117952 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8fff78d-x568c" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerName="neutron-httpd" containerID="cri-o://0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7" gracePeriod=30 Jan 05 23:24:34 crc kubenswrapper[5034]: I0105 23:24:34.217458 5034 generic.go:334] "Generic (PLEG): container finished" podID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerID="0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7" exitCode=0 Jan 05 23:24:34 crc kubenswrapper[5034]: I0105 23:24:34.217550 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fff78d-x568c" event={"ID":"fbd527dc-29b7-4120-a2f4-ff99c4ca660f","Type":"ContainerDied","Data":"0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7"} Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.186000 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.246672 5034 generic.go:334] "Generic (PLEG): container finished" podID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerID="18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874" exitCode=0 Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.246737 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8fff78d-x568c" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.246733 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fff78d-x568c" event={"ID":"fbd527dc-29b7-4120-a2f4-ff99c4ca660f","Type":"ContainerDied","Data":"18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874"} Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.246809 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fff78d-x568c" event={"ID":"fbd527dc-29b7-4120-a2f4-ff99c4ca660f","Type":"ContainerDied","Data":"810692ff152185512f973155610922cfae5e8a1f2404819d0292b2832b859826"} Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.246834 5034 scope.go:117] "RemoveContainer" containerID="0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.255411 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dxnk\" (UniqueName: \"kubernetes.io/projected/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-kube-api-access-4dxnk\") pod \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.255519 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-config\") pod \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.255598 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-httpd-config\") pod \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.255637 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-ovndb-tls-certs\") pod \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.255671 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-combined-ca-bundle\") pod \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\" (UID: \"fbd527dc-29b7-4120-a2f4-ff99c4ca660f\") " Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.261545 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fbd527dc-29b7-4120-a2f4-ff99c4ca660f" (UID: "fbd527dc-29b7-4120-a2f4-ff99c4ca660f"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.261620 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-kube-api-access-4dxnk" (OuterVolumeSpecName: "kube-api-access-4dxnk") pod "fbd527dc-29b7-4120-a2f4-ff99c4ca660f" (UID: "fbd527dc-29b7-4120-a2f4-ff99c4ca660f"). InnerVolumeSpecName "kube-api-access-4dxnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.267942 5034 scope.go:117] "RemoveContainer" containerID="18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.300426 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-config" (OuterVolumeSpecName: "config") pod "fbd527dc-29b7-4120-a2f4-ff99c4ca660f" (UID: "fbd527dc-29b7-4120-a2f4-ff99c4ca660f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.301631 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbd527dc-29b7-4120-a2f4-ff99c4ca660f" (UID: "fbd527dc-29b7-4120-a2f4-ff99c4ca660f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.322051 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fbd527dc-29b7-4120-a2f4-ff99c4ca660f" (UID: "fbd527dc-29b7-4120-a2f4-ff99c4ca660f"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.354492 5034 scope.go:117] "RemoveContainer" containerID="0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7" Jan 05 23:24:37 crc kubenswrapper[5034]: E0105 23:24:37.355499 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7\": container with ID starting with 0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7 not found: ID does not exist" containerID="0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.355562 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7"} err="failed to get container status \"0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7\": rpc error: code = NotFound desc = could not find container \"0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7\": container with ID starting with 0aadf2ff26d88d280a4b724d7ca13de1442fcfe81d2cd77da0c147e4550493e7 not found: ID does not exist" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.355598 5034 scope.go:117] "RemoveContainer" containerID="18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874" Jan 05 23:24:37 crc kubenswrapper[5034]: E0105 23:24:37.356304 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874\": container with ID starting with 18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874 not found: ID does not exist" containerID="18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.356343 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874"} err="failed to get container status \"18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874\": rpc error: code = NotFound desc = could not find container \"18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874\": container with ID starting with 18051b7d05743338384ed06d6d14054e5958583325f27218f226fee7b87bb874 not found: ID does not exist" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.357316 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.357343 5034 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.357352 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.357363 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dxnk\" (UniqueName: 
\"kubernetes.io/projected/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-kube-api-access-4dxnk\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.357373 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbd527dc-29b7-4120-a2f4-ff99c4ca660f-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.578575 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8fff78d-x568c"] Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.586214 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8fff78d-x568c"] Jan 05 23:24:37 crc kubenswrapper[5034]: I0105 23:24:37.852381 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" path="/var/lib/kubelet/pods/fbd527dc-29b7-4120-a2f4-ff99c4ca660f/volumes" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.626491 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4bzn4"] Jan 05 23:24:48 crc kubenswrapper[5034]: E0105 23:24:48.627930 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerName="neutron-api" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.627986 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerName="neutron-api" Jan 05 23:24:48 crc kubenswrapper[5034]: E0105 23:24:48.628009 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12fcda91-6451-4fd1-a777-6d02f3c82aee" containerName="dnsmasq-dns" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.628021 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="12fcda91-6451-4fd1-a777-6d02f3c82aee" containerName="dnsmasq-dns" Jan 05 23:24:48 crc kubenswrapper[5034]: E0105 23:24:48.628062 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12fcda91-6451-4fd1-a777-6d02f3c82aee" containerName="init" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.628074 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="12fcda91-6451-4fd1-a777-6d02f3c82aee" containerName="init" Jan 05 23:24:48 crc kubenswrapper[5034]: E0105 23:24:48.628115 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerName="neutron-httpd" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.628125 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerName="neutron-httpd" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.628539 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="12fcda91-6451-4fd1-a777-6d02f3c82aee" containerName="dnsmasq-dns" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.628587 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerName="neutron-httpd" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.628602 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd527dc-29b7-4120-a2f4-ff99c4ca660f" containerName="neutron-api" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.629622 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.634519 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.634817 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dgm9b" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.635125 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.635458 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.652666 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4bzn4"] Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.657893 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.786221 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848dfd49-ppw2r"] Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.797969 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848dfd49-ppw2r"] Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.798124 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.821173 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-swiftconf\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.821239 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521454f6-b1af-4a98-bf39-89711fb8840b-etc-swift\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.821265 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-scripts\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.821308 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-dispersionconf\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.821394 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-combined-ca-bundle\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 
05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.821417 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldmnk\" (UniqueName: \"kubernetes.io/projected/521454f6-b1af-4a98-bf39-89711fb8840b-kube-api-access-ldmnk\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.821456 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-ring-data-devices\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.922635 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-dispersionconf\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.922952 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxd2\" (UniqueName: \"kubernetes.io/projected/fdf89f3b-148b-4991-aee0-43c4b9494f5d-kube-api-access-pnxd2\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923006 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-sb\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923026 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-combined-ca-bundle\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923043 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldmnk\" (UniqueName: \"kubernetes.io/projected/521454f6-b1af-4a98-bf39-89711fb8840b-kube-api-access-ldmnk\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923145 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-ring-data-devices\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923164 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-dns-svc\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " 
pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923278 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-config\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923318 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-nb\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923344 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-swiftconf\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923363 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521454f6-b1af-4a98-bf39-89711fb8840b-etc-swift\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923387 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-scripts\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.923981 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521454f6-b1af-4a98-bf39-89711fb8840b-etc-swift\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.924321 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-ring-data-devices\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.924817 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-scripts\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.935201 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-dispersionconf\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.936471 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-swiftconf\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.942273 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-combined-ca-bundle\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.947258 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldmnk\" (UniqueName: \"kubernetes.io/projected/521454f6-b1af-4a98-bf39-89711fb8840b-kube-api-access-ldmnk\") pod \"swift-ring-rebalance-4bzn4\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:48 crc kubenswrapper[5034]: I0105 23:24:48.955685 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.025283 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-config\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.025405 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-nb\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.026828 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-nb\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.027231 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxd2\" (UniqueName: \"kubernetes.io/projected/fdf89f3b-148b-4991-aee0-43c4b9494f5d-kube-api-access-pnxd2\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.027270 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-sb\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.027273 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-config\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.027365 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-dns-svc\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.028252 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-sb\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.031576 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-dns-svc\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.058168 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxd2\" (UniqueName: \"kubernetes.io/projected/fdf89f3b-148b-4991-aee0-43c4b9494f5d-kube-api-access-pnxd2\") pod \"dnsmasq-dns-848dfd49-ppw2r\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.120874 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.506039 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848dfd49-ppw2r"] Jan 05 23:24:49 crc kubenswrapper[5034]: I0105 23:24:49.576290 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4bzn4"] Jan 05 23:24:50 crc kubenswrapper[5034]: I0105 23:24:50.373824 5034 generic.go:334] "Generic (PLEG): container finished" podID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" containerID="5e3837c376fce68be89cde05807c18192936559ed853c5f195b032db9fcc76d3" exitCode=0 Jan 05 23:24:50 crc kubenswrapper[5034]: I0105 23:24:50.374324 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" event={"ID":"fdf89f3b-148b-4991-aee0-43c4b9494f5d","Type":"ContainerDied","Data":"5e3837c376fce68be89cde05807c18192936559ed853c5f195b032db9fcc76d3"} Jan 05 23:24:50 crc kubenswrapper[5034]: I0105 23:24:50.374371 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" event={"ID":"fdf89f3b-148b-4991-aee0-43c4b9494f5d","Type":"ContainerStarted","Data":"7aef6b5dd9ef19d780f68a3ffe257ed02f680617da6fa31e86f87cb98973c76d"} Jan 05 23:24:50 crc kubenswrapper[5034]: I0105 23:24:50.380290 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4bzn4" event={"ID":"521454f6-b1af-4a98-bf39-89711fb8840b","Type":"ContainerStarted","Data":"3cc15c9c5d1f8b99f1cb2ec44587edc85ea6e0c4262020ac076845ab52e817f2"} Jan 05 23:24:50 crc kubenswrapper[5034]: I0105 23:24:50.380354 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4bzn4" event={"ID":"521454f6-b1af-4a98-bf39-89711fb8840b","Type":"ContainerStarted","Data":"94e42ef29da0ff9f13be7972437aa550dcbe8bcd524c18e1578fdc8e297b9b36"} Jan 05 23:24:50 crc kubenswrapper[5034]: I0105 23:24:50.433838 5034 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/swift-ring-rebalance-4bzn4" podStartSLOduration=2.433800706 podStartE2EDuration="2.433800706s" podCreationTimestamp="2026-01-05 23:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:24:50.426869169 +0000 UTC m=+5582.798868618" watchObservedRunningTime="2026-01-05 23:24:50.433800706 +0000 UTC m=+5582.805800145" Jan 05 23:24:50 crc kubenswrapper[5034]: I0105 23:24:50.470213 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:24:50 crc kubenswrapper[5034]: I0105 23:24:50.470281 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:24:51 crc kubenswrapper[5034]: I0105 23:24:51.390669 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" event={"ID":"fdf89f3b-148b-4991-aee0-43c4b9494f5d","Type":"ContainerStarted","Data":"b515dec3464a80c6ec438466007b6c57bad90a36cda0274109ce423e39e01b24"} Jan 05 23:24:51 crc kubenswrapper[5034]: I0105 23:24:51.391021 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:51 crc kubenswrapper[5034]: I0105 23:24:51.414839 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" podStartSLOduration=3.414801076 podStartE2EDuration="3.414801076s" podCreationTimestamp="2026-01-05 23:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:24:51.405997477 +0000 UTC m=+5583.777996916" watchObservedRunningTime="2026-01-05 23:24:51.414801076 +0000 UTC m=+5583.786800525" Jan 05 23:24:52 crc kubenswrapper[5034]: I0105 23:24:52.953334 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-57d5547d58-mm9qr"] Jan 05 23:24:52 crc kubenswrapper[5034]: I0105 23:24:52.955623 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:52 crc kubenswrapper[5034]: I0105 23:24:52.960191 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 05 23:24:52 crc kubenswrapper[5034]: I0105 23:24:52.960437 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 05 23:24:52 crc kubenswrapper[5034]: I0105 23:24:52.961641 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 05 23:24:52 crc kubenswrapper[5034]: I0105 23:24:52.964532 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-57d5547d58-mm9qr"] Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.032813 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-combined-ca-bundle\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.032920 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-internal-tls-certs\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.032978 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-config-data\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.033005 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/707ebc8c-26aa-41c2-8dd4-14cf8df07600-run-httpd\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.033108 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/707ebc8c-26aa-41c2-8dd4-14cf8df07600-log-httpd\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.033128 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/707ebc8c-26aa-41c2-8dd4-14cf8df07600-etc-swift\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.033164 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkkb\" (UniqueName: \"kubernetes.io/projected/707ebc8c-26aa-41c2-8dd4-14cf8df07600-kube-api-access-wfkkb\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " 
pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.033186 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-public-tls-certs\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.134580 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/707ebc8c-26aa-41c2-8dd4-14cf8df07600-log-httpd\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.134639 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/707ebc8c-26aa-41c2-8dd4-14cf8df07600-etc-swift\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.134675 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfkkb\" (UniqueName: \"kubernetes.io/projected/707ebc8c-26aa-41c2-8dd4-14cf8df07600-kube-api-access-wfkkb\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.134698 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-public-tls-certs\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.134730 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-combined-ca-bundle\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.134802 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-internal-tls-certs\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.134838 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-config-data\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.134870 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/707ebc8c-26aa-41c2-8dd4-14cf8df07600-run-httpd\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" 
Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.135327 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/707ebc8c-26aa-41c2-8dd4-14cf8df07600-log-httpd\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.135349 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/707ebc8c-26aa-41c2-8dd4-14cf8df07600-run-httpd\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.141909 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/707ebc8c-26aa-41c2-8dd4-14cf8df07600-etc-swift\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.142828 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-combined-ca-bundle\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.142942 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-public-tls-certs\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.142878 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-internal-tls-certs\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.146060 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707ebc8c-26aa-41c2-8dd4-14cf8df07600-config-data\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.155354 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfkkb\" (UniqueName: \"kubernetes.io/projected/707ebc8c-26aa-41c2-8dd4-14cf8df07600-kube-api-access-wfkkb\") pod \"swift-proxy-57d5547d58-mm9qr\" (UID: \"707ebc8c-26aa-41c2-8dd4-14cf8df07600\") " pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.278646 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:53 crc kubenswrapper[5034]: I0105 23:24:53.948168 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-57d5547d58-mm9qr"] Jan 05 23:24:54 crc kubenswrapper[5034]: I0105 23:24:54.417063 5034 generic.go:334] "Generic (PLEG): container finished" podID="521454f6-b1af-4a98-bf39-89711fb8840b" containerID="3cc15c9c5d1f8b99f1cb2ec44587edc85ea6e0c4262020ac076845ab52e817f2" exitCode=0 Jan 05 23:24:54 crc kubenswrapper[5034]: I0105 23:24:54.417305 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4bzn4" event={"ID":"521454f6-b1af-4a98-bf39-89711fb8840b","Type":"ContainerDied","Data":"3cc15c9c5d1f8b99f1cb2ec44587edc85ea6e0c4262020ac076845ab52e817f2"} Jan 05 23:24:54 crc kubenswrapper[5034]: I0105 23:24:54.422902 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57d5547d58-mm9qr" event={"ID":"707ebc8c-26aa-41c2-8dd4-14cf8df07600","Type":"ContainerStarted","Data":"5f6f5cf9741de0a9b51736538a56c3679240df048e47c8a18cd69fcb95f548b9"} Jan 05 23:24:54 crc kubenswrapper[5034]: I0105 23:24:54.423060 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57d5547d58-mm9qr" event={"ID":"707ebc8c-26aa-41c2-8dd4-14cf8df07600","Type":"ContainerStarted","Data":"83298688f9159d31b82239eb7975a3f43faf17e2621513bdecbec191ea3a3723"} Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.441134 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57d5547d58-mm9qr" event={"ID":"707ebc8c-26aa-41c2-8dd4-14cf8df07600","Type":"ContainerStarted","Data":"da00477282b7dacca9e01bef88b8f2d99eefc88c3cb3ed34830a5b883cade1e0"} Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.441275 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.441316 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.477301 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-57d5547d58-mm9qr" podStartSLOduration=3.477271197 podStartE2EDuration="3.477271197s" podCreationTimestamp="2026-01-05 23:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:24:55.473328765 +0000 UTC m=+5587.845328204" watchObservedRunningTime="2026-01-05 23:24:55.477271197 +0000 UTC m=+5587.849270636" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.815834 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.899791 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-dispersionconf\") pod \"521454f6-b1af-4a98-bf39-89711fb8840b\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.899937 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-ring-data-devices\") pod \"521454f6-b1af-4a98-bf39-89711fb8840b\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.899968 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-scripts\") pod \"521454f6-b1af-4a98-bf39-89711fb8840b\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.900000 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-combined-ca-bundle\") pod \"521454f6-b1af-4a98-bf39-89711fb8840b\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.900232 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521454f6-b1af-4a98-bf39-89711fb8840b-etc-swift\") pod \"521454f6-b1af-4a98-bf39-89711fb8840b\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.900579 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "521454f6-b1af-4a98-bf39-89711fb8840b" (UID: "521454f6-b1af-4a98-bf39-89711fb8840b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.901095 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/521454f6-b1af-4a98-bf39-89711fb8840b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "521454f6-b1af-4a98-bf39-89711fb8840b" (UID: "521454f6-b1af-4a98-bf39-89711fb8840b"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.901137 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-swiftconf\") pod \"521454f6-b1af-4a98-bf39-89711fb8840b\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.901197 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldmnk\" (UniqueName: \"kubernetes.io/projected/521454f6-b1af-4a98-bf39-89711fb8840b-kube-api-access-ldmnk\") pod \"521454f6-b1af-4a98-bf39-89711fb8840b\" (UID: \"521454f6-b1af-4a98-bf39-89711fb8840b\") " Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.902158 5034 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521454f6-b1af-4a98-bf39-89711fb8840b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.902229 5034 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.905030 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521454f6-b1af-4a98-bf39-89711fb8840b-kube-api-access-ldmnk" (OuterVolumeSpecName: "kube-api-access-ldmnk") pod "521454f6-b1af-4a98-bf39-89711fb8840b" (UID: "521454f6-b1af-4a98-bf39-89711fb8840b"). InnerVolumeSpecName "kube-api-access-ldmnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.917844 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "521454f6-b1af-4a98-bf39-89711fb8840b" (UID: "521454f6-b1af-4a98-bf39-89711fb8840b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.923955 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "521454f6-b1af-4a98-bf39-89711fb8840b" (UID: "521454f6-b1af-4a98-bf39-89711fb8840b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.924281 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-scripts" (OuterVolumeSpecName: "scripts") pod "521454f6-b1af-4a98-bf39-89711fb8840b" (UID: "521454f6-b1af-4a98-bf39-89711fb8840b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:55 crc kubenswrapper[5034]: I0105 23:24:55.929641 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "521454f6-b1af-4a98-bf39-89711fb8840b" (UID: "521454f6-b1af-4a98-bf39-89711fb8840b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:24:56 crc kubenswrapper[5034]: I0105 23:24:56.004895 5034 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:56 crc kubenswrapper[5034]: I0105 23:24:56.004941 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldmnk\" (UniqueName: \"kubernetes.io/projected/521454f6-b1af-4a98-bf39-89711fb8840b-kube-api-access-ldmnk\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:56 crc kubenswrapper[5034]: I0105 23:24:56.004952 5034 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:56 crc kubenswrapper[5034]: I0105 23:24:56.004964 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521454f6-b1af-4a98-bf39-89711fb8840b-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:56 crc kubenswrapper[5034]: I0105 23:24:56.004972 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521454f6-b1af-4a98-bf39-89711fb8840b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:56 crc kubenswrapper[5034]: I0105 23:24:56.450176 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4bzn4" Jan 05 23:24:56 crc kubenswrapper[5034]: I0105 23:24:56.450175 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4bzn4" event={"ID":"521454f6-b1af-4a98-bf39-89711fb8840b","Type":"ContainerDied","Data":"94e42ef29da0ff9f13be7972437aa550dcbe8bcd524c18e1578fdc8e297b9b36"} Jan 05 23:24:56 crc kubenswrapper[5034]: I0105 23:24:56.450228 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e42ef29da0ff9f13be7972437aa550dcbe8bcd524c18e1578fdc8e297b9b36" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.123157 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.186639 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f894ffd67-2bd46"] Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.187208 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" podUID="55f19b5f-9663-427b-a68c-7ff5888fe36e" containerName="dnsmasq-dns" containerID="cri-o://41964c6590b6c53ac4333204c69454af4dc6322f8da092e8cfe20c1d83997e5c" gracePeriod=10 Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.429103 5034 scope.go:117] "RemoveContainer" containerID="d7d838d5144683cfdc034bd83586142c7200ae080f71b2460d00cc6a7b5ef9af" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.518428 5034 generic.go:334] "Generic (PLEG): container finished" podID="55f19b5f-9663-427b-a68c-7ff5888fe36e" containerID="41964c6590b6c53ac4333204c69454af4dc6322f8da092e8cfe20c1d83997e5c" exitCode=0 Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.519070 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" event={"ID":"55f19b5f-9663-427b-a68c-7ff5888fe36e","Type":"ContainerDied","Data":"41964c6590b6c53ac4333204c69454af4dc6322f8da092e8cfe20c1d83997e5c"} Jan 05 
23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.682788 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.775736 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-config\") pod \"55f19b5f-9663-427b-a68c-7ff5888fe36e\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.775789 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-sb\") pod \"55f19b5f-9663-427b-a68c-7ff5888fe36e\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.775840 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-nb\") pod \"55f19b5f-9663-427b-a68c-7ff5888fe36e\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.775862 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npt9t\" (UniqueName: \"kubernetes.io/projected/55f19b5f-9663-427b-a68c-7ff5888fe36e-kube-api-access-npt9t\") pod \"55f19b5f-9663-427b-a68c-7ff5888fe36e\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.775887 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-dns-svc\") pod \"55f19b5f-9663-427b-a68c-7ff5888fe36e\" (UID: \"55f19b5f-9663-427b-a68c-7ff5888fe36e\") " Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.797313 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f19b5f-9663-427b-a68c-7ff5888fe36e-kube-api-access-npt9t" (OuterVolumeSpecName: "kube-api-access-npt9t") pod "55f19b5f-9663-427b-a68c-7ff5888fe36e" (UID: "55f19b5f-9663-427b-a68c-7ff5888fe36e"). InnerVolumeSpecName "kube-api-access-npt9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.825580 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-config" (OuterVolumeSpecName: "config") pod "55f19b5f-9663-427b-a68c-7ff5888fe36e" (UID: "55f19b5f-9663-427b-a68c-7ff5888fe36e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.828173 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55f19b5f-9663-427b-a68c-7ff5888fe36e" (UID: "55f19b5f-9663-427b-a68c-7ff5888fe36e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.840790 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55f19b5f-9663-427b-a68c-7ff5888fe36e" (UID: "55f19b5f-9663-427b-a68c-7ff5888fe36e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.853519 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55f19b5f-9663-427b-a68c-7ff5888fe36e" (UID: "55f19b5f-9663-427b-a68c-7ff5888fe36e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.878334 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.878368 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.878383 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.878394 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55f19b5f-9663-427b-a68c-7ff5888fe36e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:24:59 crc kubenswrapper[5034]: I0105 23:24:59.878403 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npt9t\" (UniqueName: \"kubernetes.io/projected/55f19b5f-9663-427b-a68c-7ff5888fe36e-kube-api-access-npt9t\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:00 crc kubenswrapper[5034]: I0105 23:25:00.530865 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" event={"ID":"55f19b5f-9663-427b-a68c-7ff5888fe36e","Type":"ContainerDied","Data":"2fe2d54da18002dd293488cb0ac8e4e40f65629096013df5b35bfd52992eff70"} Jan 05 23:25:00 crc kubenswrapper[5034]: I0105 23:25:00.530921 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f894ffd67-2bd46" Jan 05 23:25:00 crc kubenswrapper[5034]: I0105 23:25:00.530951 5034 scope.go:117] "RemoveContainer" containerID="41964c6590b6c53ac4333204c69454af4dc6322f8da092e8cfe20c1d83997e5c" Jan 05 23:25:00 crc kubenswrapper[5034]: I0105 23:25:00.565526 5034 scope.go:117] "RemoveContainer" containerID="8fee1ac5a0ce47609eb57f221cd00e3c3257318a377d3818dbd57489b6cdd372" Jan 05 23:25:00 crc kubenswrapper[5034]: I0105 23:25:00.573112 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f894ffd67-2bd46"] Jan 05 23:25:00 crc kubenswrapper[5034]: I0105 23:25:00.580012 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f894ffd67-2bd46"] Jan 05 23:25:01 crc kubenswrapper[5034]: I0105 23:25:01.849314 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f19b5f-9663-427b-a68c-7ff5888fe36e" path="/var/lib/kubelet/pods/55f19b5f-9663-427b-a68c-7ff5888fe36e/volumes" Jan 05 23:25:03 crc kubenswrapper[5034]: I0105 23:25:03.283211 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:25:03 crc kubenswrapper[5034]: I0105 23:25:03.285897 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-57d5547d58-mm9qr" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.351387 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-k9nvl"] Jan 05 23:25:09 crc kubenswrapper[5034]: E0105 23:25:09.352427 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f19b5f-9663-427b-a68c-7ff5888fe36e" containerName="dnsmasq-dns" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.352447 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f19b5f-9663-427b-a68c-7ff5888fe36e" containerName="dnsmasq-dns" Jan 05 23:25:09 crc kubenswrapper[5034]: E0105 23:25:09.352478 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f19b5f-9663-427b-a68c-7ff5888fe36e" containerName="init" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.352486 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f19b5f-9663-427b-a68c-7ff5888fe36e" containerName="init" Jan 05 23:25:09 crc kubenswrapper[5034]: E0105 23:25:09.352499 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521454f6-b1af-4a98-bf39-89711fb8840b" containerName="swift-ring-rebalance" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.352506 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="521454f6-b1af-4a98-bf39-89711fb8840b" containerName="swift-ring-rebalance" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.352719 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f19b5f-9663-427b-a68c-7ff5888fe36e" containerName="dnsmasq-dns" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.352736 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="521454f6-b1af-4a98-bf39-89711fb8840b" containerName="swift-ring-rebalance" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.353578 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.374987 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k9nvl"] Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.452838 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8b1a-account-create-update-chql2"] Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.456145 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.459588 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.461913 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89db938a-84f1-4a3b-942a-322a862a9987-operator-scripts\") pod \"cinder-db-create-k9nvl\" (UID: \"89db938a-84f1-4a3b-942a-322a862a9987\") " pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.462118 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88pbn\" (UniqueName: \"kubernetes.io/projected/89db938a-84f1-4a3b-942a-322a862a9987-kube-api-access-88pbn\") pod \"cinder-db-create-k9nvl\" (UID: \"89db938a-84f1-4a3b-942a-322a862a9987\") " pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.469454 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8b1a-account-create-update-chql2"] Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.563748 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-operator-scripts\") pod \"cinder-8b1a-account-create-update-chql2\" (UID: \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\") " pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.563801 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ngvh\" (UniqueName: \"kubernetes.io/projected/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-kube-api-access-9ngvh\") pod \"cinder-8b1a-account-create-update-chql2\" (UID: \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\") " pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.564274 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88pbn\" (UniqueName: \"kubernetes.io/projected/89db938a-84f1-4a3b-942a-322a862a9987-kube-api-access-88pbn\") pod \"cinder-db-create-k9nvl\" (UID: \"89db938a-84f1-4a3b-942a-322a862a9987\") " pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.564355 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89db938a-84f1-4a3b-942a-322a862a9987-operator-scripts\") pod \"cinder-db-create-k9nvl\" (UID: \"89db938a-84f1-4a3b-942a-322a862a9987\") " pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.565222 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/89db938a-84f1-4a3b-942a-322a862a9987-operator-scripts\") pod \"cinder-db-create-k9nvl\" (UID: \"89db938a-84f1-4a3b-942a-322a862a9987\") " pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.587796 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88pbn\" (UniqueName: \"kubernetes.io/projected/89db938a-84f1-4a3b-942a-322a862a9987-kube-api-access-88pbn\") pod \"cinder-db-create-k9nvl\" (UID: \"89db938a-84f1-4a3b-942a-322a862a9987\") " pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.666268 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-operator-scripts\") pod \"cinder-8b1a-account-create-update-chql2\" (UID: \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\") " pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.666324 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ngvh\" (UniqueName: \"kubernetes.io/projected/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-kube-api-access-9ngvh\") pod \"cinder-8b1a-account-create-update-chql2\" (UID: \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\") " pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.672527 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-operator-scripts\") pod \"cinder-8b1a-account-create-update-chql2\" (UID: \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\") " pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.682650 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ngvh\" (UniqueName: \"kubernetes.io/projected/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-kube-api-access-9ngvh\") pod \"cinder-8b1a-account-create-update-chql2\" (UID: \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\") " pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.684470 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:09 crc kubenswrapper[5034]: I0105 23:25:09.780669 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:10 crc kubenswrapper[5034]: I0105 23:25:10.158492 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k9nvl"] Jan 05 23:25:10 crc kubenswrapper[5034]: W0105 23:25:10.164367 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89db938a_84f1_4a3b_942a_322a862a9987.slice/crio-0191f5b43d563e3b6e1b6d57924346c2a369821fc7e254d8c31ac7d0bf0df4e0 WatchSource:0}: Error finding container 0191f5b43d563e3b6e1b6d57924346c2a369821fc7e254d8c31ac7d0bf0df4e0: Status 404 returned error can't find the container with id 0191f5b43d563e3b6e1b6d57924346c2a369821fc7e254d8c31ac7d0bf0df4e0 Jan 05 23:25:10 crc kubenswrapper[5034]: I0105 23:25:10.168460 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8b1a-account-create-update-chql2"] Jan 05 23:25:10 crc kubenswrapper[5034]: W0105 23:25:10.170630 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa6fd90b_a398_4f8c_b3fe_17f18f6acba8.slice/crio-679eb6f5b9f9bd7546258a5032c97776a8937ebc6f552713bb6188d89c11f1b3 WatchSource:0}: Error finding container 679eb6f5b9f9bd7546258a5032c97776a8937ebc6f552713bb6188d89c11f1b3: Status 404 returned error can't find the container with id 679eb6f5b9f9bd7546258a5032c97776a8937ebc6f552713bb6188d89c11f1b3 Jan 05 23:25:10 crc kubenswrapper[5034]: I0105 23:25:10.634718 5034 generic.go:334] "Generic (PLEG): container finished" podID="fa6fd90b-a398-4f8c-b3fe-17f18f6acba8" containerID="43a47e5f6cf830d2a52fc7d42e0f8b4c4dc0ddc7efaa62a8c0ae879a5c9ea103" exitCode=0 Jan 05 23:25:10 crc kubenswrapper[5034]: I0105 23:25:10.635329 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8b1a-account-create-update-chql2" event={"ID":"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8","Type":"ContainerDied","Data":"43a47e5f6cf830d2a52fc7d42e0f8b4c4dc0ddc7efaa62a8c0ae879a5c9ea103"} Jan 05 23:25:10 crc kubenswrapper[5034]: I0105 23:25:10.635366 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8b1a-account-create-update-chql2" event={"ID":"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8","Type":"ContainerStarted","Data":"679eb6f5b9f9bd7546258a5032c97776a8937ebc6f552713bb6188d89c11f1b3"} Jan 05 23:25:10 crc kubenswrapper[5034]: I0105 23:25:10.637783 5034 generic.go:334] "Generic (PLEG): container finished" podID="89db938a-84f1-4a3b-942a-322a862a9987" containerID="1bc5612160501759aabec8a6f7b00314fadab142cd741789586a71bd904cce9f" exitCode=0 Jan 05 23:25:10 crc kubenswrapper[5034]: I0105 23:25:10.637819 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k9nvl" event={"ID":"89db938a-84f1-4a3b-942a-322a862a9987","Type":"ContainerDied","Data":"1bc5612160501759aabec8a6f7b00314fadab142cd741789586a71bd904cce9f"} Jan 05 23:25:10 crc kubenswrapper[5034]: I0105 23:25:10.637838 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k9nvl" event={"ID":"89db938a-84f1-4a3b-942a-322a862a9987","Type":"ContainerStarted","Data":"0191f5b43d563e3b6e1b6d57924346c2a369821fc7e254d8c31ac7d0bf0df4e0"} Jan 05 23:25:11 crc kubenswrapper[5034]: I0105 23:25:11.925166 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.008243 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88pbn\" (UniqueName: \"kubernetes.io/projected/89db938a-84f1-4a3b-942a-322a862a9987-kube-api-access-88pbn\") pod \"89db938a-84f1-4a3b-942a-322a862a9987\" (UID: \"89db938a-84f1-4a3b-942a-322a862a9987\") " Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.008306 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89db938a-84f1-4a3b-942a-322a862a9987-operator-scripts\") pod \"89db938a-84f1-4a3b-942a-322a862a9987\" (UID: \"89db938a-84f1-4a3b-942a-322a862a9987\") " Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.009060 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89db938a-84f1-4a3b-942a-322a862a9987-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89db938a-84f1-4a3b-942a-322a862a9987" (UID: "89db938a-84f1-4a3b-942a-322a862a9987"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.015509 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89db938a-84f1-4a3b-942a-322a862a9987-kube-api-access-88pbn" (OuterVolumeSpecName: "kube-api-access-88pbn") pod "89db938a-84f1-4a3b-942a-322a862a9987" (UID: "89db938a-84f1-4a3b-942a-322a862a9987"). InnerVolumeSpecName "kube-api-access-88pbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.071389 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.110855 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88pbn\" (UniqueName: \"kubernetes.io/projected/89db938a-84f1-4a3b-942a-322a862a9987-kube-api-access-88pbn\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.110894 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89db938a-84f1-4a3b-942a-322a862a9987-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.212721 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ngvh\" (UniqueName: \"kubernetes.io/projected/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-kube-api-access-9ngvh\") pod \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\" (UID: \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\") " Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.213195 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-operator-scripts\") pod \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\" (UID: \"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8\") " Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.214164 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa6fd90b-a398-4f8c-b3fe-17f18f6acba8" (UID: "fa6fd90b-a398-4f8c-b3fe-17f18f6acba8"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.215828 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-kube-api-access-9ngvh" (OuterVolumeSpecName: "kube-api-access-9ngvh") pod "fa6fd90b-a398-4f8c-b3fe-17f18f6acba8" (UID: "fa6fd90b-a398-4f8c-b3fe-17f18f6acba8"). InnerVolumeSpecName "kube-api-access-9ngvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.315263 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ngvh\" (UniqueName: \"kubernetes.io/projected/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-kube-api-access-9ngvh\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.315316 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.658306 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8b1a-account-create-update-chql2" event={"ID":"fa6fd90b-a398-4f8c-b3fe-17f18f6acba8","Type":"ContainerDied","Data":"679eb6f5b9f9bd7546258a5032c97776a8937ebc6f552713bb6188d89c11f1b3"} Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.658362 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679eb6f5b9f9bd7546258a5032c97776a8937ebc6f552713bb6188d89c11f1b3" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.658256 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8b1a-account-create-update-chql2" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.660303 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k9nvl" event={"ID":"89db938a-84f1-4a3b-942a-322a862a9987","Type":"ContainerDied","Data":"0191f5b43d563e3b6e1b6d57924346c2a369821fc7e254d8c31ac7d0bf0df4e0"} Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.660460 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0191f5b43d563e3b6e1b6d57924346c2a369821fc7e254d8c31ac7d0bf0df4e0" Jan 05 23:25:12 crc kubenswrapper[5034]: I0105 23:25:12.660354 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k9nvl" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.759067 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xcgcs"] Jan 05 23:25:14 crc kubenswrapper[5034]: E0105 23:25:14.760334 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6fd90b-a398-4f8c-b3fe-17f18f6acba8" containerName="mariadb-account-create-update" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.760354 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6fd90b-a398-4f8c-b3fe-17f18f6acba8" containerName="mariadb-account-create-update" Jan 05 23:25:14 crc kubenswrapper[5034]: E0105 23:25:14.760495 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89db938a-84f1-4a3b-942a-322a862a9987" containerName="mariadb-database-create" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.760508 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="89db938a-84f1-4a3b-942a-322a862a9987" containerName="mariadb-database-create" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.760904 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="89db938a-84f1-4a3b-942a-322a862a9987" containerName="mariadb-database-create" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.760936 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa6fd90b-a398-4f8c-b3fe-17f18f6acba8" containerName="mariadb-account-create-update" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.761979 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.765725 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.765942 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.765985 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cgcz5" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.772904 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xcgcs"] Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.864104 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-config-data\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.864197 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f351d493-b50b-4af9-bdce-a40e38e34d49-etc-machine-id\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.864229 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-db-sync-config-data\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.864278 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knttm\" (UniqueName: \"kubernetes.io/projected/f351d493-b50b-4af9-bdce-a40e38e34d49-kube-api-access-knttm\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.864327 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-combined-ca-bundle\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.864369 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-scripts\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.966250 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-combined-ca-bundle\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.966325 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-scripts\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.966376 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-config-data\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.966414 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f351d493-b50b-4af9-bdce-a40e38e34d49-etc-machine-id\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.966439 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-db-sync-config-data\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.966499 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knttm\" (UniqueName: \"kubernetes.io/projected/f351d493-b50b-4af9-bdce-a40e38e34d49-kube-api-access-knttm\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.967208 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f351d493-b50b-4af9-bdce-a40e38e34d49-etc-machine-id\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.978697 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-scripts\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.978888 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-combined-ca-bundle\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.979631 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-db-sync-config-data\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.979830 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-config-data\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:14 crc kubenswrapper[5034]: I0105 23:25:14.984096 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knttm\" (UniqueName: \"kubernetes.io/projected/f351d493-b50b-4af9-bdce-a40e38e34d49-kube-api-access-knttm\") pod \"cinder-db-sync-xcgcs\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:15 crc kubenswrapper[5034]: I0105 23:25:15.140749 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:15 crc kubenswrapper[5034]: I0105 23:25:15.585302 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xcgcs"] Jan 05 23:25:15 crc kubenswrapper[5034]: W0105 23:25:15.592609 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf351d493_b50b_4af9_bdce_a40e38e34d49.slice/crio-d6c71d4d71d9cdb63ec301d83e2c761f8c2d91f94b153a90a1230ff876313fe8 WatchSource:0}: Error finding container d6c71d4d71d9cdb63ec301d83e2c761f8c2d91f94b153a90a1230ff876313fe8: Status 404 returned error can't find the container with id d6c71d4d71d9cdb63ec301d83e2c761f8c2d91f94b153a90a1230ff876313fe8 Jan 05 23:25:15 crc kubenswrapper[5034]: I0105 23:25:15.704289 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xcgcs" event={"ID":"f351d493-b50b-4af9-bdce-a40e38e34d49","Type":"ContainerStarted","Data":"d6c71d4d71d9cdb63ec301d83e2c761f8c2d91f94b153a90a1230ff876313fe8"} Jan 05 23:25:16 crc kubenswrapper[5034]: I0105 23:25:16.715622 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xcgcs" event={"ID":"f351d493-b50b-4af9-bdce-a40e38e34d49","Type":"ContainerStarted","Data":"de18e02efd23ec84c46e0a51b0b21f6f738511f24b5c52c3a202d8c1d45cdb31"} Jan 05 23:25:16 crc kubenswrapper[5034]: I0105 23:25:16.739888 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xcgcs" podStartSLOduration=2.739858097 podStartE2EDuration="2.739858097s" podCreationTimestamp="2026-01-05 23:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:25:16.73255575 +0000 UTC m=+5609.104555189" watchObservedRunningTime="2026-01-05 23:25:16.739858097 +0000 UTC m=+5609.111857536" Jan 05 23:25:18 crc kubenswrapper[5034]: I0105 23:25:18.733130 5034 generic.go:334] "Generic (PLEG): container finished" podID="f351d493-b50b-4af9-bdce-a40e38e34d49" containerID="de18e02efd23ec84c46e0a51b0b21f6f738511f24b5c52c3a202d8c1d45cdb31" exitCode=0 Jan 05 23:25:18 crc kubenswrapper[5034]: I0105 23:25:18.733197 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xcgcs" event={"ID":"f351d493-b50b-4af9-bdce-a40e38e34d49","Type":"ContainerDied","Data":"de18e02efd23ec84c46e0a51b0b21f6f738511f24b5c52c3a202d8c1d45cdb31"} Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.072880 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.183745 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-config-data\") pod \"f351d493-b50b-4af9-bdce-a40e38e34d49\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.184002 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-scripts\") pod \"f351d493-b50b-4af9-bdce-a40e38e34d49\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.184103 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-db-sync-config-data\") pod \"f351d493-b50b-4af9-bdce-a40e38e34d49\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.184181 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-combined-ca-bundle\") pod \"f351d493-b50b-4af9-bdce-a40e38e34d49\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.184235 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knttm\" (UniqueName: \"kubernetes.io/projected/f351d493-b50b-4af9-bdce-a40e38e34d49-kube-api-access-knttm\") pod \"f351d493-b50b-4af9-bdce-a40e38e34d49\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.184424 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f351d493-b50b-4af9-bdce-a40e38e34d49-etc-machine-id\") pod \"f351d493-b50b-4af9-bdce-a40e38e34d49\" (UID: \"f351d493-b50b-4af9-bdce-a40e38e34d49\") " Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.184856 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f351d493-b50b-4af9-bdce-a40e38e34d49-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f351d493-b50b-4af9-bdce-a40e38e34d49" (UID: "f351d493-b50b-4af9-bdce-a40e38e34d49"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.184993 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f351d493-b50b-4af9-bdce-a40e38e34d49-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.189983 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f351d493-b50b-4af9-bdce-a40e38e34d49" (UID: "f351d493-b50b-4af9-bdce-a40e38e34d49"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.190739 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-scripts" (OuterVolumeSpecName: "scripts") pod "f351d493-b50b-4af9-bdce-a40e38e34d49" (UID: "f351d493-b50b-4af9-bdce-a40e38e34d49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.190792 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f351d493-b50b-4af9-bdce-a40e38e34d49-kube-api-access-knttm" (OuterVolumeSpecName: "kube-api-access-knttm") pod "f351d493-b50b-4af9-bdce-a40e38e34d49" (UID: "f351d493-b50b-4af9-bdce-a40e38e34d49"). InnerVolumeSpecName "kube-api-access-knttm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.211272 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f351d493-b50b-4af9-bdce-a40e38e34d49" (UID: "f351d493-b50b-4af9-bdce-a40e38e34d49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.229540 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-config-data" (OuterVolumeSpecName: "config-data") pod "f351d493-b50b-4af9-bdce-a40e38e34d49" (UID: "f351d493-b50b-4af9-bdce-a40e38e34d49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.286233 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.286501 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.286570 5034 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.286641 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f351d493-b50b-4af9-bdce-a40e38e34d49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.286697 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knttm\" (UniqueName: \"kubernetes.io/projected/f351d493-b50b-4af9-bdce-a40e38e34d49-kube-api-access-knttm\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.469314 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 
23:25:20.469389 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.750499 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xcgcs" event={"ID":"f351d493-b50b-4af9-bdce-a40e38e34d49","Type":"ContainerDied","Data":"d6c71d4d71d9cdb63ec301d83e2c761f8c2d91f94b153a90a1230ff876313fe8"} Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.750539 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xcgcs" Jan 05 23:25:20 crc kubenswrapper[5034]: I0105 23:25:20.750542 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c71d4d71d9cdb63ec301d83e2c761f8c2d91f94b153a90a1230ff876313fe8" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.045764 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795bf77d9c-mrt84"] Jan 05 23:25:21 crc kubenswrapper[5034]: E0105 23:25:21.046183 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f351d493-b50b-4af9-bdce-a40e38e34d49" containerName="cinder-db-sync" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.046198 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f351d493-b50b-4af9-bdce-a40e38e34d49" containerName="cinder-db-sync" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.046378 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f351d493-b50b-4af9-bdce-a40e38e34d49" containerName="cinder-db-sync" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.047349 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.067905 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795bf77d9c-mrt84"] Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.205708 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-nb\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.205793 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-sb\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.206121 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-dns-svc\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.206302 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7hrw\" (UniqueName: \"kubernetes.io/projected/36a04d15-352d-499c-a8e8-3ca3d15dd13b-kube-api-access-f7hrw\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.206560 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-config\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.222594 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.224455 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.230552 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.230833 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.231068 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.231305 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cgcz5" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.235986 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.308019 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-dns-svc\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.308125 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrv7\" (UniqueName: \"kubernetes.io/projected/69ecff1a-93e5-4e00-bba4-1a43f617048c-kube-api-access-vdrv7\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.308165 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7hrw\" (UniqueName: \"kubernetes.io/projected/36a04d15-352d-499c-a8e8-3ca3d15dd13b-kube-api-access-f7hrw\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.308189 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-scripts\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.308235 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.308256 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-config\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.308272 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: 
I0105 23:25:21.308298 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ecff1a-93e5-4e00-bba4-1a43f617048c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.309452 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-dns-svc\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.309660 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-config\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.309769 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-nb\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.309833 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ecff1a-93e5-4e00-bba4-1a43f617048c-logs\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.309884 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data-custom\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.309917 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-sb\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.310483 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-nb\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.311310 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-sb\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.335109 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7hrw\" (UniqueName: 
\"kubernetes.io/projected/36a04d15-352d-499c-a8e8-3ca3d15dd13b-kube-api-access-f7hrw\") pod \"dnsmasq-dns-795bf77d9c-mrt84\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.365901 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.412238 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data-custom\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.412617 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrv7\" (UniqueName: \"kubernetes.io/projected/69ecff1a-93e5-4e00-bba4-1a43f617048c-kube-api-access-vdrv7\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.412645 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-scripts\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.412706 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.412729 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.412755 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ecff1a-93e5-4e00-bba4-1a43f617048c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.412802 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ecff1a-93e5-4e00-bba4-1a43f617048c-logs\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.413288 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ecff1a-93e5-4e00-bba4-1a43f617048c-logs\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.413974 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ecff1a-93e5-4e00-bba4-1a43f617048c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " 
pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.418204 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.418418 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data-custom\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.423013 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-scripts\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.429879 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.448606 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrv7\" (UniqueName: \"kubernetes.io/projected/69ecff1a-93e5-4e00-bba4-1a43f617048c-kube-api-access-vdrv7\") pod \"cinder-api-0\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.546097 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.630398 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795bf77d9c-mrt84"] Jan 05 23:25:21 crc kubenswrapper[5034]: I0105 23:25:21.765741 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" event={"ID":"36a04d15-352d-499c-a8e8-3ca3d15dd13b","Type":"ContainerStarted","Data":"1f66add5a336ce6a20ca0325e2401c0d55d42c5d2b7a0486ec950af0091feb6c"} Jan 05 23:25:22 crc kubenswrapper[5034]: I0105 23:25:22.032209 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:22 crc kubenswrapper[5034]: I0105 23:25:22.789387 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69ecff1a-93e5-4e00-bba4-1a43f617048c","Type":"ContainerStarted","Data":"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e"} Jan 05 23:25:22 crc kubenswrapper[5034]: I0105 23:25:22.789717 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69ecff1a-93e5-4e00-bba4-1a43f617048c","Type":"ContainerStarted","Data":"3aaf759d637a6f317753f74c5a72bd90d26fd2f57ae78e09b8e037a110ff9bc9"} Jan 05 23:25:22 crc kubenswrapper[5034]: I0105 23:25:22.803261 5034 generic.go:334] "Generic (PLEG): container finished" podID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" containerID="99f0ff6920366930dd6792706572b0fcb6db8d7f0bd7447974b55bd91ecb97d0" exitCode=0 Jan 05 23:25:22 crc kubenswrapper[5034]: I0105 23:25:22.803319 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" event={"ID":"36a04d15-352d-499c-a8e8-3ca3d15dd13b","Type":"ContainerDied","Data":"99f0ff6920366930dd6792706572b0fcb6db8d7f0bd7447974b55bd91ecb97d0"} Jan 05 23:25:23 crc kubenswrapper[5034]: I0105 23:25:23.753462 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:23 crc kubenswrapper[5034]: I0105 23:25:23.815383 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69ecff1a-93e5-4e00-bba4-1a43f617048c","Type":"ContainerStarted","Data":"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083"} Jan 05 23:25:23 crc kubenswrapper[5034]: I0105 23:25:23.815541 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 23:25:23 crc kubenswrapper[5034]: I0105 23:25:23.820046 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" event={"ID":"36a04d15-352d-499c-a8e8-3ca3d15dd13b","Type":"ContainerStarted","Data":"1cc2b81dd8a2b6e157ee5960d3cb640d865c44c0cfc16cfb5b09d81b3f27fff6"} Jan 05 23:25:23 crc kubenswrapper[5034]: I0105 23:25:23.820190 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:23 crc kubenswrapper[5034]: I0105 23:25:23.840161 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.838886135 podStartE2EDuration="2.838886135s" podCreationTimestamp="2026-01-05 23:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:25:23.832693049 +0000 UTC m=+5616.204692498" watchObservedRunningTime="2026-01-05 23:25:23.838886135 +0000 UTC m=+5616.210885574" Jan 05 23:25:23 crc kubenswrapper[5034]: I0105 
23:25:23.862597 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" podStartSLOduration=2.862575557 podStartE2EDuration="2.862575557s" podCreationTimestamp="2026-01-05 23:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:25:23.859036187 +0000 UTC m=+5616.231035626" watchObservedRunningTime="2026-01-05 23:25:23.862575557 +0000 UTC m=+5616.234574996" Jan 05 23:25:24 crc kubenswrapper[5034]: I0105 23:25:24.831654 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerName="cinder-api-log" containerID="cri-o://d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e" gracePeriod=30 Jan 05 23:25:24 crc kubenswrapper[5034]: I0105 23:25:24.832271 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerName="cinder-api" containerID="cri-o://0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083" gracePeriod=30 Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.456908 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.492788 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-scripts\") pod \"69ecff1a-93e5-4e00-bba4-1a43f617048c\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.493035 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ecff1a-93e5-4e00-bba4-1a43f617048c-etc-machine-id\") pod \"69ecff1a-93e5-4e00-bba4-1a43f617048c\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.493069 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-combined-ca-bundle\") pod \"69ecff1a-93e5-4e00-bba4-1a43f617048c\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.493176 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdrv7\" (UniqueName: \"kubernetes.io/projected/69ecff1a-93e5-4e00-bba4-1a43f617048c-kube-api-access-vdrv7\") pod \"69ecff1a-93e5-4e00-bba4-1a43f617048c\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.493211 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ecff1a-93e5-4e00-bba4-1a43f617048c-logs\") pod \"69ecff1a-93e5-4e00-bba4-1a43f617048c\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.493240 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data\") pod \"69ecff1a-93e5-4e00-bba4-1a43f617048c\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.493283 5034 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data-custom\") pod \"69ecff1a-93e5-4e00-bba4-1a43f617048c\" (UID: \"69ecff1a-93e5-4e00-bba4-1a43f617048c\") " Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.494966 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69ecff1a-93e5-4e00-bba4-1a43f617048c-logs" (OuterVolumeSpecName: "logs") pod "69ecff1a-93e5-4e00-bba4-1a43f617048c" (UID: "69ecff1a-93e5-4e00-bba4-1a43f617048c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.500398 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-scripts" (OuterVolumeSpecName: "scripts") pod "69ecff1a-93e5-4e00-bba4-1a43f617048c" (UID: "69ecff1a-93e5-4e00-bba4-1a43f617048c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.500651 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "69ecff1a-93e5-4e00-bba4-1a43f617048c" (UID: "69ecff1a-93e5-4e00-bba4-1a43f617048c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.500696 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ecff1a-93e5-4e00-bba4-1a43f617048c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "69ecff1a-93e5-4e00-bba4-1a43f617048c" (UID: "69ecff1a-93e5-4e00-bba4-1a43f617048c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.505998 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ecff1a-93e5-4e00-bba4-1a43f617048c-kube-api-access-vdrv7" (OuterVolumeSpecName: "kube-api-access-vdrv7") pod "69ecff1a-93e5-4e00-bba4-1a43f617048c" (UID: "69ecff1a-93e5-4e00-bba4-1a43f617048c"). InnerVolumeSpecName "kube-api-access-vdrv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.550102 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69ecff1a-93e5-4e00-bba4-1a43f617048c" (UID: "69ecff1a-93e5-4e00-bba4-1a43f617048c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.556873 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data" (OuterVolumeSpecName: "config-data") pod "69ecff1a-93e5-4e00-bba4-1a43f617048c" (UID: "69ecff1a-93e5-4e00-bba4-1a43f617048c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.595678 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ecff1a-93e5-4e00-bba4-1a43f617048c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.596059 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.596164 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdrv7\" (UniqueName: \"kubernetes.io/projected/69ecff1a-93e5-4e00-bba4-1a43f617048c-kube-api-access-vdrv7\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.596252 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ecff1a-93e5-4e00-bba4-1a43f617048c-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.596332 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.596408 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.596490 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ecff1a-93e5-4e00-bba4-1a43f617048c-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.841815 5034 generic.go:334] "Generic (PLEG): container finished" podID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerID="0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083" exitCode=0 Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.842422 5034 generic.go:334] "Generic (PLEG): container finished" podID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerID="d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e" exitCode=143 Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.841937 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.850613 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69ecff1a-93e5-4e00-bba4-1a43f617048c","Type":"ContainerDied","Data":"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083"} Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.850656 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69ecff1a-93e5-4e00-bba4-1a43f617048c","Type":"ContainerDied","Data":"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e"} Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.850667 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69ecff1a-93e5-4e00-bba4-1a43f617048c","Type":"ContainerDied","Data":"3aaf759d637a6f317753f74c5a72bd90d26fd2f57ae78e09b8e037a110ff9bc9"} Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.850693 5034 scope.go:117] "RemoveContainer" containerID="0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.877727 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.887575 5034 scope.go:117] "RemoveContainer" containerID="d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.892195 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.911636 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:25 crc kubenswrapper[5034]: E0105 23:25:25.912048 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerName="cinder-api-log" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.912064 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerName="cinder-api-log" Jan 05 23:25:25 crc kubenswrapper[5034]: E0105 23:25:25.912094 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerName="cinder-api" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.912103 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerName="cinder-api" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.912288 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerName="cinder-api" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.912310 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" containerName="cinder-api-log" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.913353 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.916673 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.917107 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.917374 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.917458 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cgcz5" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.917642 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.922489 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.922906 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.948316 5034 scope.go:117] "RemoveContainer" containerID="0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083" Jan 05 23:25:25 crc kubenswrapper[5034]: E0105 23:25:25.950214 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083\": container with ID starting with 0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083 not found: ID does not exist" containerID="0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.950264 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083"} err="failed to get container status \"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083\": rpc error: code = NotFound desc = could not find container \"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083\": container with ID starting with 0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083 not found: ID does not exist" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.950292 5034 scope.go:117] "RemoveContainer" containerID="d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e" Jan 05 23:25:25 crc kubenswrapper[5034]: E0105 23:25:25.950766 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e\": container with ID starting with d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e not found: ID does not exist" containerID="d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.950796 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e"} err="failed to get container status \"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e\": rpc error: code = NotFound desc = could not find container \"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e\": 
container with ID starting with d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e not found: ID does not exist" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.950814 5034 scope.go:117] "RemoveContainer" containerID="0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.951164 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083"} err="failed to get container status \"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083\": rpc error: code = NotFound desc = could not find container \"0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083\": container with ID starting with 0e532717f5e437a2a79458e0f43efe65d8f09f587b857c342ca072773892c083 not found: ID does not exist" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.951182 5034 scope.go:117] "RemoveContainer" containerID="d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e" Jan 05 23:25:25 crc kubenswrapper[5034]: I0105 23:25:25.951842 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e"} err="failed to get container status \"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e\": rpc error: code = NotFound desc = could not find container \"d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e\": container with ID starting with d01daa4adf149d9b5eab9c9c1b889181e4438f131f5aed0a2d9d3f50661bba8e not found: ID does not exist" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.004526 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.004608 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.004824 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0469a5-32e5-4454-be32-d3dda84a0c0b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.004910 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.005368 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0469a5-32e5-4454-be32-d3dda84a0c0b-logs\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 
23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.005439 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-scripts\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.005513 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.005587 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.005654 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnn2g\" (UniqueName: \"kubernetes.io/projected/fb0469a5-32e5-4454-be32-d3dda84a0c0b-kube-api-access-lnn2g\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.107771 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnn2g\" (UniqueName: \"kubernetes.io/projected/fb0469a5-32e5-4454-be32-d3dda84a0c0b-kube-api-access-lnn2g\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.107850 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.107884 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.107922 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0469a5-32e5-4454-be32-d3dda84a0c0b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.107965 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.108020 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0469a5-32e5-4454-be32-d3dda84a0c0b-logs\") pod 
\"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.108042 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-scripts\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.108070 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.108117 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.109241 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0469a5-32e5-4454-be32-d3dda84a0c0b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.109728 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0469a5-32e5-4454-be32-d3dda84a0c0b-logs\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.115053 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.115069 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.115665 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.116733 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.117018 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-scripts\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " 
pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.118625 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.128953 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnn2g\" (UniqueName: \"kubernetes.io/projected/fb0469a5-32e5-4454-be32-d3dda84a0c0b-kube-api-access-lnn2g\") pod \"cinder-api-0\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.236810 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.682581 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:26 crc kubenswrapper[5034]: W0105 23:25:26.687398 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb0469a5_32e5_4454_be32_d3dda84a0c0b.slice/crio-9b2b9c01da5f94884c5f6b8f4f944f333c0a7ce4690a7af002fcea35da7ffe5f WatchSource:0}: Error finding container 9b2b9c01da5f94884c5f6b8f4f944f333c0a7ce4690a7af002fcea35da7ffe5f: Status 404 returned error can't find the container with id 9b2b9c01da5f94884c5f6b8f4f944f333c0a7ce4690a7af002fcea35da7ffe5f Jan 05 23:25:26 crc kubenswrapper[5034]: I0105 23:25:26.853398 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0469a5-32e5-4454-be32-d3dda84a0c0b","Type":"ContainerStarted","Data":"9b2b9c01da5f94884c5f6b8f4f944f333c0a7ce4690a7af002fcea35da7ffe5f"} Jan 05 23:25:27 crc kubenswrapper[5034]: I0105 23:25:27.854003 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ecff1a-93e5-4e00-bba4-1a43f617048c" path="/var/lib/kubelet/pods/69ecff1a-93e5-4e00-bba4-1a43f617048c/volumes" Jan 05 23:25:27 crc kubenswrapper[5034]: I0105 23:25:27.866499 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0469a5-32e5-4454-be32-d3dda84a0c0b","Type":"ContainerStarted","Data":"5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88"} Jan 05 23:25:27 crc kubenswrapper[5034]: I0105 23:25:27.866795 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 23:25:27 crc kubenswrapper[5034]: I0105 23:25:27.866897 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0469a5-32e5-4454-be32-d3dda84a0c0b","Type":"ContainerStarted","Data":"0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9"} Jan 05 23:25:27 crc kubenswrapper[5034]: I0105 23:25:27.892443 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.8924156610000002 podStartE2EDuration="2.892415661s" podCreationTimestamp="2026-01-05 23:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:25:27.884206268 +0000 UTC m=+5620.256205707" watchObservedRunningTime="2026-01-05 23:25:27.892415661 +0000 UTC m=+5620.264415100" Jan 05 23:25:31 crc kubenswrapper[5034]: I0105 23:25:31.368766 5034 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:25:31 crc kubenswrapper[5034]: I0105 23:25:31.430014 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848dfd49-ppw2r"] Jan 05 23:25:31 crc kubenswrapper[5034]: I0105 23:25:31.430352 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" podUID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" containerName="dnsmasq-dns" containerID="cri-o://b515dec3464a80c6ec438466007b6c57bad90a36cda0274109ce423e39e01b24" gracePeriod=10 Jan 05 23:25:31 crc kubenswrapper[5034]: I0105 23:25:31.900848 5034 generic.go:334] "Generic (PLEG): container finished" podID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" containerID="b515dec3464a80c6ec438466007b6c57bad90a36cda0274109ce423e39e01b24" exitCode=0 Jan 05 23:25:31 crc kubenswrapper[5034]: I0105 23:25:31.900904 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" event={"ID":"fdf89f3b-148b-4991-aee0-43c4b9494f5d","Type":"ContainerDied","Data":"b515dec3464a80c6ec438466007b6c57bad90a36cda0274109ce423e39e01b24"} Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.458776 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.533056 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxd2\" (UniqueName: \"kubernetes.io/projected/fdf89f3b-148b-4991-aee0-43c4b9494f5d-kube-api-access-pnxd2\") pod \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.533183 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-nb\") pod \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.533241 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-config\") pod \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.533276 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-sb\") pod \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.533397 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-dns-svc\") pod \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\" (UID: \"fdf89f3b-148b-4991-aee0-43c4b9494f5d\") " Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.540440 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf89f3b-148b-4991-aee0-43c4b9494f5d-kube-api-access-pnxd2" (OuterVolumeSpecName: "kube-api-access-pnxd2") pod "fdf89f3b-148b-4991-aee0-43c4b9494f5d" (UID: "fdf89f3b-148b-4991-aee0-43c4b9494f5d"). InnerVolumeSpecName "kube-api-access-pnxd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.585651 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdf89f3b-148b-4991-aee0-43c4b9494f5d" (UID: "fdf89f3b-148b-4991-aee0-43c4b9494f5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.590354 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdf89f3b-148b-4991-aee0-43c4b9494f5d" (UID: "fdf89f3b-148b-4991-aee0-43c4b9494f5d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.592264 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-config" (OuterVolumeSpecName: "config") pod "fdf89f3b-148b-4991-aee0-43c4b9494f5d" (UID: "fdf89f3b-148b-4991-aee0-43c4b9494f5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.593842 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdf89f3b-148b-4991-aee0-43c4b9494f5d" (UID: "fdf89f3b-148b-4991-aee0-43c4b9494f5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.635941 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxd2\" (UniqueName: \"kubernetes.io/projected/fdf89f3b-148b-4991-aee0-43c4b9494f5d-kube-api-access-pnxd2\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.635983 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.635994 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.636007 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.636016 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf89f3b-148b-4991-aee0-43c4b9494f5d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.911028 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" event={"ID":"fdf89f3b-148b-4991-aee0-43c4b9494f5d","Type":"ContainerDied","Data":"7aef6b5dd9ef19d780f68a3ffe257ed02f680617da6fa31e86f87cb98973c76d"} Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.911872 5034 scope.go:117] "RemoveContainer" 
containerID="b515dec3464a80c6ec438466007b6c57bad90a36cda0274109ce423e39e01b24" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.912236 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848dfd49-ppw2r" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.953064 5034 scope.go:117] "RemoveContainer" containerID="5e3837c376fce68be89cde05807c18192936559ed853c5f195b032db9fcc76d3" Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.979205 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848dfd49-ppw2r"] Jan 05 23:25:32 crc kubenswrapper[5034]: I0105 23:25:32.986856 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848dfd49-ppw2r"] Jan 05 23:25:33 crc kubenswrapper[5034]: I0105 23:25:33.856350 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" path="/var/lib/kubelet/pods/fdf89f3b-148b-4991-aee0-43c4b9494f5d/volumes" Jan 05 23:25:38 crc kubenswrapper[5034]: I0105 23:25:38.166964 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 05 23:25:50 crc kubenswrapper[5034]: I0105 23:25:50.468776 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:25:50 crc kubenswrapper[5034]: I0105 23:25:50.469448 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:25:50 crc kubenswrapper[5034]: I0105 23:25:50.469498 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 23:25:50 crc kubenswrapper[5034]: I0105 23:25:50.470370 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 23:25:50 crc kubenswrapper[5034]: I0105 23:25:50.470417 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" gracePeriod=600 Jan 05 23:25:50 crc kubenswrapper[5034]: E0105 23:25:50.609948 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:25:51 crc kubenswrapper[5034]: I0105 23:25:51.104290 5034 generic.go:334] "Generic 
(PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" exitCode=0 Jan 05 23:25:51 crc kubenswrapper[5034]: I0105 23:25:51.105056 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"} Jan 05 23:25:51 crc kubenswrapper[5034]: I0105 23:25:51.105426 5034 scope.go:117] "RemoveContainer" containerID="f74e12195f4a4167262b994444528ef434816fe2ad847b01f18b8380f45c84a2" Jan 05 23:25:51 crc kubenswrapper[5034]: I0105 23:25:51.106664 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:25:51 crc kubenswrapper[5034]: E0105 23:25:51.107224 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:25:55 crc kubenswrapper[5034]: I0105 23:25:55.951987 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:25:55 crc kubenswrapper[5034]: E0105 23:25:55.952887 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" containerName="init" Jan 05 23:25:55 crc kubenswrapper[5034]: I0105 23:25:55.952902 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" containerName="init" Jan 05 23:25:55 crc kubenswrapper[5034]: E0105 23:25:55.952915 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" containerName="dnsmasq-dns" Jan 05 23:25:55 crc kubenswrapper[5034]: I0105 23:25:55.952921 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" containerName="dnsmasq-dns" Jan 05 23:25:55 crc kubenswrapper[5034]: I0105 23:25:55.953119 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf89f3b-148b-4991-aee0-43c4b9494f5d" containerName="dnsmasq-dns" Jan 05 23:25:55 crc kubenswrapper[5034]: I0105 23:25:55.954013 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 23:25:55 crc kubenswrapper[5034]: I0105 23:25:55.956391 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 05 23:25:55 crc kubenswrapper[5034]: I0105 23:25:55.967674 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.127631 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.127736 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-scripts\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.127783 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.127807 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.127835 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86tq2\" (UniqueName: \"kubernetes.io/projected/0124707d-4355-4fe1-8386-2476e8191501-kube-api-access-86tq2\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.127866 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0124707d-4355-4fe1-8386-2476e8191501-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.229593 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.229732 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-scripts\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.229792 5034 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.229832 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.229869 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86tq2\" (UniqueName: \"kubernetes.io/projected/0124707d-4355-4fe1-8386-2476e8191501-kube-api-access-86tq2\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.229920 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0124707d-4355-4fe1-8386-2476e8191501-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.230123 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0124707d-4355-4fe1-8386-2476e8191501-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.236793 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.249686 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.249756 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-scripts\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.251925 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.261828 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86tq2\" (UniqueName: \"kubernetes.io/projected/0124707d-4355-4fe1-8386-2476e8191501-kube-api-access-86tq2\") pod \"cinder-scheduler-0\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 
crc kubenswrapper[5034]: I0105 23:25:56.273537 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 23:25:56 crc kubenswrapper[5034]: I0105 23:25:56.839810 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:25:57 crc kubenswrapper[5034]: I0105 23:25:57.158218 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0124707d-4355-4fe1-8386-2476e8191501","Type":"ContainerStarted","Data":"350a79a5e9e24d433dc6185abe4361ff4d1eed7d51bfbff60632be5ec0d07c25"} Jan 05 23:25:57 crc kubenswrapper[5034]: I0105 23:25:57.439751 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:25:57 crc kubenswrapper[5034]: I0105 23:25:57.440174 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerName="cinder-api-log" containerID="cri-o://0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9" gracePeriod=30 Jan 05 23:25:57 crc kubenswrapper[5034]: I0105 23:25:57.440274 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerName="cinder-api" containerID="cri-o://5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88" gracePeriod=30 Jan 05 23:25:58 crc kubenswrapper[5034]: I0105 23:25:58.169574 5034 generic.go:334] "Generic (PLEG): container finished" podID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerID="0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9" exitCode=143 Jan 05 23:25:58 crc kubenswrapper[5034]: I0105 23:25:58.169690 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0469a5-32e5-4454-be32-d3dda84a0c0b","Type":"ContainerDied","Data":"0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9"} Jan 05 23:25:58 crc kubenswrapper[5034]: I0105 23:25:58.172179 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0124707d-4355-4fe1-8386-2476e8191501","Type":"ContainerStarted","Data":"6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174"} Jan 05 23:25:58 crc kubenswrapper[5034]: I0105 23:25:58.172226 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0124707d-4355-4fe1-8386-2476e8191501","Type":"ContainerStarted","Data":"043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be"} Jan 05 23:25:58 crc kubenswrapper[5034]: I0105 23:25:58.201589 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.201558798 podStartE2EDuration="3.201558798s" podCreationTimestamp="2026-01-05 23:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:25:58.192567932 +0000 UTC m=+5650.564567371" watchObservedRunningTime="2026-01-05 23:25:58.201558798 +0000 UTC m=+5650.573558237" Jan 05 23:25:59 crc kubenswrapper[5034]: I0105 23:25:59.598319 5034 scope.go:117] "RemoveContainer" containerID="c4531d68dfa7d1f7116ac9c80d7a3f6c9fb7483135410687704c708b3a69b7b3" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.035343 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157104 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data-custom\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157429 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-scripts\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157543 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157578 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-combined-ca-bundle\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157628 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0469a5-32e5-4454-be32-d3dda84a0c0b-logs\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157648 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnn2g\" (UniqueName: \"kubernetes.io/projected/fb0469a5-32e5-4454-be32-d3dda84a0c0b-kube-api-access-lnn2g\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157708 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-internal-tls-certs\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157739 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-public-tls-certs\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.157769 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0469a5-32e5-4454-be32-d3dda84a0c0b-etc-machine-id\") pod \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\" (UID: \"fb0469a5-32e5-4454-be32-d3dda84a0c0b\") " Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.158261 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0469a5-32e5-4454-be32-d3dda84a0c0b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: 
"fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.158988 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb0469a5-32e5-4454-be32-d3dda84a0c0b-logs" (OuterVolumeSpecName: "logs") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: "fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.163549 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-scripts" (OuterVolumeSpecName: "scripts") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: "fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.179873 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0469a5-32e5-4454-be32-d3dda84a0c0b-kube-api-access-lnn2g" (OuterVolumeSpecName: "kube-api-access-lnn2g") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: "fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "kube-api-access-lnn2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.180060 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: "fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.194822 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: "fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.212294 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: "fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.217010 5034 generic.go:334] "Generic (PLEG): container finished" podID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerID="5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88" exitCode=0 Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.217069 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.217134 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0469a5-32e5-4454-be32-d3dda84a0c0b","Type":"ContainerDied","Data":"5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88"} Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.217217 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0469a5-32e5-4454-be32-d3dda84a0c0b","Type":"ContainerDied","Data":"9b2b9c01da5f94884c5f6b8f4f944f333c0a7ce4690a7af002fcea35da7ffe5f"} Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.217251 5034 scope.go:117] "RemoveContainer" containerID="5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.224070 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: "fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.225683 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data" (OuterVolumeSpecName: "config-data") pod "fb0469a5-32e5-4454-be32-d3dda84a0c0b" (UID: "fb0469a5-32e5-4454-be32-d3dda84a0c0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.260239 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.260276 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.260937 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0469a5-32e5-4454-be32-d3dda84a0c0b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.260953 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.260966 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.260976 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.260984 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0469a5-32e5-4454-be32-d3dda84a0c0b-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.260992 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0469a5-32e5-4454-be32-d3dda84a0c0b-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.261024 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnn2g\" (UniqueName: \"kubernetes.io/projected/fb0469a5-32e5-4454-be32-d3dda84a0c0b-kube-api-access-lnn2g\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.274935 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.290589 5034 scope.go:117] "RemoveContainer" containerID="0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.316873 5034 scope.go:117] "RemoveContainer" containerID="5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88" Jan 05 23:26:01 crc kubenswrapper[5034]: E0105 23:26:01.317558 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88\": container with ID starting with 5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88 not found: ID does not exist" containerID="5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.317616 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88"} err="failed to get container status \"5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88\": rpc error: code = NotFound desc = could not find container \"5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88\": container with ID starting with 5ec7ef60528ba76ee7fdcd73a98ad1868c0bc856f7eb632dccb8741298d4ab88 not found: ID does not exist" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.317656 5034 scope.go:117] "RemoveContainer" containerID="0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9" Jan 05 23:26:01 crc kubenswrapper[5034]: E0105 23:26:01.318688 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9\": container with ID starting with 0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9 not found: ID does not exist" containerID="0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.318717 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9"} err="failed to get container status \"0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9\": rpc error: code = NotFound desc = could not find container \"0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9\": container with ID starting with 0f9d0c3e219bdfbbe8fa64c0365c3f99430ed81896509e5cca8c03f4a95a42e9 not found: ID does not exist" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.550366 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:26:01 crc 
kubenswrapper[5034]: I0105 23:26:01.557429 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.582206 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:26:01 crc kubenswrapper[5034]: E0105 23:26:01.582666 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerName="cinder-api-log" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.582687 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerName="cinder-api-log" Jan 05 23:26:01 crc kubenswrapper[5034]: E0105 23:26:01.582705 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerName="cinder-api" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.582711 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerName="cinder-api" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.582887 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerName="cinder-api-log" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.582908 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" containerName="cinder-api" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.583886 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.585961 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.586270 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.587617 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.589702 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.670666 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-config-data-custom\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.670723 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.670761 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9hp\" (UniqueName: \"kubernetes.io/projected/01b11e57-4451-4a92-8fef-35e0026b1fad-kube-api-access-px9hp\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.670789 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01b11e57-4451-4a92-8fef-35e0026b1fad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.670932 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-public-tls-certs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.670972 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b11e57-4451-4a92-8fef-35e0026b1fad-logs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.670993 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-config-data\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.671185 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-scripts\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.671248 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.773675 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-config-data-custom\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.773754 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.773824 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px9hp\" (UniqueName: \"kubernetes.io/projected/01b11e57-4451-4a92-8fef-35e0026b1fad-kube-api-access-px9hp\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.773877 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01b11e57-4451-4a92-8fef-35e0026b1fad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " 
pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.773927 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-public-tls-certs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.774033 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b11e57-4451-4a92-8fef-35e0026b1fad-logs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.774062 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01b11e57-4451-4a92-8fef-35e0026b1fad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.774177 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-config-data\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.774472 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b11e57-4451-4a92-8fef-35e0026b1fad-logs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.775059 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-scripts\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.775241 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.779202 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-public-tls-certs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.779746 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-config-data\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.780747 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 
23:26:01.826916 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-config-data-custom\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.827549 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.828427 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b11e57-4451-4a92-8fef-35e0026b1fad-scripts\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.829629 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px9hp\" (UniqueName: \"kubernetes.io/projected/01b11e57-4451-4a92-8fef-35e0026b1fad-kube-api-access-px9hp\") pod \"cinder-api-0\" (UID: \"01b11e57-4451-4a92-8fef-35e0026b1fad\") " pod="openstack/cinder-api-0" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.851082 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0469a5-32e5-4454-be32-d3dda84a0c0b" path="/var/lib/kubelet/pods/fb0469a5-32e5-4454-be32-d3dda84a0c0b/volumes" Jan 05 23:26:01 crc kubenswrapper[5034]: I0105 23:26:01.899288 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 23:26:02 crc kubenswrapper[5034]: I0105 23:26:02.873749 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 23:26:02 crc kubenswrapper[5034]: W0105 23:26:02.886273 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01b11e57_4451_4a92_8fef_35e0026b1fad.slice/crio-eff65a52aa9a37d4901f7c2a56b3b3911b1592d5a2ba681650d255e4a87bf671 WatchSource:0}: Error finding container eff65a52aa9a37d4901f7c2a56b3b3911b1592d5a2ba681650d255e4a87bf671: Status 404 returned error can't find the container with id eff65a52aa9a37d4901f7c2a56b3b3911b1592d5a2ba681650d255e4a87bf671 Jan 05 23:26:03 crc kubenswrapper[5034]: I0105 23:26:03.237763 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01b11e57-4451-4a92-8fef-35e0026b1fad","Type":"ContainerStarted","Data":"eff65a52aa9a37d4901f7c2a56b3b3911b1592d5a2ba681650d255e4a87bf671"} Jan 05 23:26:03 crc kubenswrapper[5034]: I0105 23:26:03.839983 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:26:03 crc kubenswrapper[5034]: E0105 23:26:03.845941 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:26:04 crc kubenswrapper[5034]: I0105 23:26:04.249693 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"01b11e57-4451-4a92-8fef-35e0026b1fad","Type":"ContainerStarted","Data":"c03101b81aa9988cd51ed9b8857364f25185efd6a29e7ce097e2d71dccead444"} Jan 05 23:26:04 crc kubenswrapper[5034]: I0105 23:26:04.250961 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01b11e57-4451-4a92-8fef-35e0026b1fad","Type":"ContainerStarted","Data":"4a55694331e9a04ac2d1daab7e7d6b58e182cca61a636901e8c43cfc7f660499"} Jan 05 23:26:04 crc kubenswrapper[5034]: I0105 23:26:04.250990 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 23:26:04 crc kubenswrapper[5034]: I0105 23:26:04.273588 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.273562888 podStartE2EDuration="3.273562888s" podCreationTimestamp="2026-01-05 23:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:26:04.265258582 +0000 UTC m=+5656.637258021" watchObservedRunningTime="2026-01-05 23:26:04.273562888 +0000 UTC m=+5656.645562337" Jan 05 23:26:06 crc kubenswrapper[5034]: I0105 23:26:06.487476 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 23:26:06 crc kubenswrapper[5034]: I0105 23:26:06.547941 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:26:07 crc kubenswrapper[5034]: I0105 23:26:07.273941 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0124707d-4355-4fe1-8386-2476e8191501" containerName="cinder-scheduler" containerID="cri-o://043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be" gracePeriod=30 Jan 05 23:26:07 crc kubenswrapper[5034]: I0105 23:26:07.274005 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0124707d-4355-4fe1-8386-2476e8191501" containerName="probe" containerID="cri-o://6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174" gracePeriod=30 Jan 05 23:26:08 crc kubenswrapper[5034]: I0105 23:26:08.312228 5034 generic.go:334] "Generic (PLEG): container finished" podID="0124707d-4355-4fe1-8386-2476e8191501" containerID="6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174" exitCode=0 Jan 05 23:26:08 crc kubenswrapper[5034]: I0105 23:26:08.312278 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0124707d-4355-4fe1-8386-2476e8191501","Type":"ContainerDied","Data":"6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174"} Jan 05 23:26:08 crc kubenswrapper[5034]: I0105 23:26:08.977107 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.122996 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-scripts\") pod \"0124707d-4355-4fe1-8386-2476e8191501\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.123070 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0124707d-4355-4fe1-8386-2476e8191501-etc-machine-id\") pod \"0124707d-4355-4fe1-8386-2476e8191501\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.123195 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0124707d-4355-4fe1-8386-2476e8191501-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0124707d-4355-4fe1-8386-2476e8191501" (UID: "0124707d-4355-4fe1-8386-2476e8191501"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.123250 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86tq2\" (UniqueName: \"kubernetes.io/projected/0124707d-4355-4fe1-8386-2476e8191501-kube-api-access-86tq2\") pod \"0124707d-4355-4fe1-8386-2476e8191501\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.123448 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data\") pod \"0124707d-4355-4fe1-8386-2476e8191501\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.123486 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-combined-ca-bundle\") pod \"0124707d-4355-4fe1-8386-2476e8191501\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.123533 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data-custom\") pod \"0124707d-4355-4fe1-8386-2476e8191501\" (UID: \"0124707d-4355-4fe1-8386-2476e8191501\") " Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.124010 5034 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0124707d-4355-4fe1-8386-2476e8191501-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.130777 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0124707d-4355-4fe1-8386-2476e8191501-kube-api-access-86tq2" (OuterVolumeSpecName: "kube-api-access-86tq2") pod "0124707d-4355-4fe1-8386-2476e8191501" (UID: "0124707d-4355-4fe1-8386-2476e8191501"). InnerVolumeSpecName "kube-api-access-86tq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.132924 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0124707d-4355-4fe1-8386-2476e8191501" (UID: "0124707d-4355-4fe1-8386-2476e8191501"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.146166 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-scripts" (OuterVolumeSpecName: "scripts") pod "0124707d-4355-4fe1-8386-2476e8191501" (UID: "0124707d-4355-4fe1-8386-2476e8191501"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.186185 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0124707d-4355-4fe1-8386-2476e8191501" (UID: "0124707d-4355-4fe1-8386-2476e8191501"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.226504 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.226558 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.226572 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.226582 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86tq2\" (UniqueName: \"kubernetes.io/projected/0124707d-4355-4fe1-8386-2476e8191501-kube-api-access-86tq2\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.234062 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data" (OuterVolumeSpecName: "config-data") pod "0124707d-4355-4fe1-8386-2476e8191501" (UID: "0124707d-4355-4fe1-8386-2476e8191501"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.327053 5034 generic.go:334] "Generic (PLEG): container finished" podID="0124707d-4355-4fe1-8386-2476e8191501" containerID="043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be" exitCode=0 Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.327169 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.327198 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0124707d-4355-4fe1-8386-2476e8191501","Type":"ContainerDied","Data":"043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be"} Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.327833 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0124707d-4355-4fe1-8386-2476e8191501-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.328385 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0124707d-4355-4fe1-8386-2476e8191501","Type":"ContainerDied","Data":"350a79a5e9e24d433dc6185abe4361ff4d1eed7d51bfbff60632be5ec0d07c25"} Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.328457 5034 scope.go:117] "RemoveContainer" containerID="6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.374819 5034 scope.go:117] "RemoveContainer" containerID="043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.386446 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.398033 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.410221 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.410377 5034 scope.go:117] "RemoveContainer" containerID="6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174" Jan 05 23:26:09 crc kubenswrapper[5034]: E0105 23:26:09.410859 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0124707d-4355-4fe1-8386-2476e8191501" containerName="cinder-scheduler" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.410887 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0124707d-4355-4fe1-8386-2476e8191501" containerName="cinder-scheduler" Jan 05 23:26:09 crc kubenswrapper[5034]: E0105 23:26:09.410908 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0124707d-4355-4fe1-8386-2476e8191501" containerName="probe" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.410917 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0124707d-4355-4fe1-8386-2476e8191501" containerName="probe" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.411157 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0124707d-4355-4fe1-8386-2476e8191501" containerName="cinder-scheduler" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.411182 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0124707d-4355-4fe1-8386-2476e8191501" containerName="probe" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.412367 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: E0105 23:26:09.412880 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174\": container with ID starting with 6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174 not found: ID does not exist" containerID="6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.412927 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174"} err="failed to get container status \"6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174\": rpc error: code = NotFound desc = could not find container \"6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174\": container with ID starting with 6df058386805eaf4695385f51b0c5c47530cb0be5399ab1eaa43dfe07a392174 not found: ID does not exist" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.412950 5034 scope.go:117] "RemoveContainer" containerID="043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be" Jan 05 23:26:09 crc kubenswrapper[5034]: E0105 23:26:09.413689 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be\": container with ID starting with 043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be not found: ID does not exist" containerID="043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.413723 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be"} err="failed to get container status \"043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be\": rpc error: code = NotFound desc = could not find container \"043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be\": container with ID starting with 043942bb6f5fd7a361b68ec18108a2cfb46e87dd1bd7c26c1f927165ee2634be not found: ID does not exist" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.417617 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.419407 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.532958 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/080915f2-140d-4a59-9621-2677bb674ed6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.533061 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.533216 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.533266 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-config-data\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.533600 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhjm\" (UniqueName: \"kubernetes.io/projected/080915f2-140d-4a59-9621-2677bb674ed6-kube-api-access-wdhjm\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.533708 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-scripts\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.635338 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhjm\" (UniqueName: \"kubernetes.io/projected/080915f2-140d-4a59-9621-2677bb674ed6-kube-api-access-wdhjm\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.635404 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-scripts\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.635477 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/080915f2-140d-4a59-9621-2677bb674ed6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.635528 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.635552 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.635567 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-config-data\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.636219 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/080915f2-140d-4a59-9621-2677bb674ed6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.639904 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-config-data\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.641642 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.649428 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.649844 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/080915f2-140d-4a59-9621-2677bb674ed6-scripts\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.654293 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhjm\" (UniqueName: \"kubernetes.io/projected/080915f2-140d-4a59-9621-2677bb674ed6-kube-api-access-wdhjm\") pod \"cinder-scheduler-0\" (UID: \"080915f2-140d-4a59-9621-2677bb674ed6\") " pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.735796 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 23:26:09 crc kubenswrapper[5034]: I0105 23:26:09.850392 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0124707d-4355-4fe1-8386-2476e8191501" path="/var/lib/kubelet/pods/0124707d-4355-4fe1-8386-2476e8191501/volumes" Jan 05 23:26:10 crc kubenswrapper[5034]: I0105 23:26:10.186681 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 23:26:10 crc kubenswrapper[5034]: I0105 23:26:10.340706 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"080915f2-140d-4a59-9621-2677bb674ed6","Type":"ContainerStarted","Data":"116e4b428c3d590a80ec19e8878d51d96f784f9d1d8afc1da1f73ce9e718c02d"} Jan 05 23:26:11 crc kubenswrapper[5034]: I0105 23:26:11.349750 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"080915f2-140d-4a59-9621-2677bb674ed6","Type":"ContainerStarted","Data":"67b7ae0cee5c20556e0fddeec2efe3df7f37d684df10cf25a6ca604c8195255a"} Jan 05 23:26:11 crc kubenswrapper[5034]: I0105 23:26:11.350336 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"080915f2-140d-4a59-9621-2677bb674ed6","Type":"ContainerStarted","Data":"33a3ca1107c0812090c3c8cde695f12ef8a4b5c9137a5a51865e2cdf9f792b51"} Jan 05 23:26:11 crc kubenswrapper[5034]: I0105 23:26:11.373172 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.37314356 podStartE2EDuration="2.37314356s" podCreationTimestamp="2026-01-05 23:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:26:11.370666009 +0000 UTC m=+5663.742665448" watchObservedRunningTime="2026-01-05 23:26:11.37314356 +0000 UTC m=+5663.745142999" Jan 05 23:26:13 crc kubenswrapper[5034]: I0105 23:26:13.917155 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 05 23:26:14 crc kubenswrapper[5034]: I0105 23:26:14.736136 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 23:26:18 crc kubenswrapper[5034]: I0105 23:26:18.838756 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:26:18 crc kubenswrapper[5034]: E0105 23:26:18.839523 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:26:20 crc kubenswrapper[5034]: I0105 23:26:20.026026 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.164056 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-k4s29"] Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.165566 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-k4s29" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.190304 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-k4s29"] Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.247741 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6ln\" (UniqueName: \"kubernetes.io/projected/05bd4ed8-7e38-4469-b295-f475bf906342-kube-api-access-pm6ln\") pod \"glance-db-create-k4s29\" (UID: \"05bd4ed8-7e38-4469-b295-f475bf906342\") " pod="openstack/glance-db-create-k4s29" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.247804 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05bd4ed8-7e38-4469-b295-f475bf906342-operator-scripts\") pod \"glance-db-create-k4s29\" (UID: \"05bd4ed8-7e38-4469-b295-f475bf906342\") " pod="openstack/glance-db-create-k4s29" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.285095 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6aa4-account-create-update-zfzvs"] Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.286315 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.290708 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.294823 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6aa4-account-create-update-zfzvs"] Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.350519 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6ln\" (UniqueName: \"kubernetes.io/projected/05bd4ed8-7e38-4469-b295-f475bf906342-kube-api-access-pm6ln\") pod \"glance-db-create-k4s29\" (UID: \"05bd4ed8-7e38-4469-b295-f475bf906342\") " pod="openstack/glance-db-create-k4s29" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.350592 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05bd4ed8-7e38-4469-b295-f475bf906342-operator-scripts\") pod \"glance-db-create-k4s29\" (UID: \"05bd4ed8-7e38-4469-b295-f475bf906342\") " pod="openstack/glance-db-create-k4s29" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.352457 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05bd4ed8-7e38-4469-b295-f475bf906342-operator-scripts\") pod \"glance-db-create-k4s29\" (UID: \"05bd4ed8-7e38-4469-b295-f475bf906342\") " pod="openstack/glance-db-create-k4s29" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.377708 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6ln\" (UniqueName: \"kubernetes.io/projected/05bd4ed8-7e38-4469-b295-f475bf906342-kube-api-access-pm6ln\") pod \"glance-db-create-k4s29\" (UID: \"05bd4ed8-7e38-4469-b295-f475bf906342\") " pod="openstack/glance-db-create-k4s29" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.452718 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmqm9\" (UniqueName: \"kubernetes.io/projected/1f6a9cba-014e-4c12-8c00-114299efe216-kube-api-access-zmqm9\") 
pod \"glance-6aa4-account-create-update-zfzvs\" (UID: \"1f6a9cba-014e-4c12-8c00-114299efe216\") " pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.452831 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6a9cba-014e-4c12-8c00-114299efe216-operator-scripts\") pod \"glance-6aa4-account-create-update-zfzvs\" (UID: \"1f6a9cba-014e-4c12-8c00-114299efe216\") " pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.532438 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-k4s29" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.554793 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmqm9\" (UniqueName: \"kubernetes.io/projected/1f6a9cba-014e-4c12-8c00-114299efe216-kube-api-access-zmqm9\") pod \"glance-6aa4-account-create-update-zfzvs\" (UID: \"1f6a9cba-014e-4c12-8c00-114299efe216\") " pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.554877 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6a9cba-014e-4c12-8c00-114299efe216-operator-scripts\") pod \"glance-6aa4-account-create-update-zfzvs\" (UID: \"1f6a9cba-014e-4c12-8c00-114299efe216\") " pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.555817 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6a9cba-014e-4c12-8c00-114299efe216-operator-scripts\") pod \"glance-6aa4-account-create-update-zfzvs\" (UID: \"1f6a9cba-014e-4c12-8c00-114299efe216\") " pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.580219 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmqm9\" (UniqueName: \"kubernetes.io/projected/1f6a9cba-014e-4c12-8c00-114299efe216-kube-api-access-zmqm9\") pod \"glance-6aa4-account-create-update-zfzvs\" (UID: \"1f6a9cba-014e-4c12-8c00-114299efe216\") " pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.600226 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:23 crc kubenswrapper[5034]: I0105 23:26:23.994258 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-k4s29"] Jan 05 23:26:24 crc kubenswrapper[5034]: I0105 23:26:24.113727 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6aa4-account-create-update-zfzvs"] Jan 05 23:26:24 crc kubenswrapper[5034]: I0105 23:26:24.503012 5034 generic.go:334] "Generic (PLEG): container finished" podID="1f6a9cba-014e-4c12-8c00-114299efe216" containerID="543753a3b743f25eb204cf68013c2f230dd0958351bb3b097bce31fe1913718e" exitCode=0 Jan 05 23:26:24 crc kubenswrapper[5034]: I0105 23:26:24.503108 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6aa4-account-create-update-zfzvs" event={"ID":"1f6a9cba-014e-4c12-8c00-114299efe216","Type":"ContainerDied","Data":"543753a3b743f25eb204cf68013c2f230dd0958351bb3b097bce31fe1913718e"} Jan 05 23:26:24 crc kubenswrapper[5034]: I0105 23:26:24.503155 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6aa4-account-create-update-zfzvs" event={"ID":"1f6a9cba-014e-4c12-8c00-114299efe216","Type":"ContainerStarted","Data":"8deb8a3b6cab95b1f50a60a97610efdf2aeab5cd4708b05186b45e2ac27add54"} Jan 05 23:26:24 crc kubenswrapper[5034]: I0105 23:26:24.506325 5034 generic.go:334] "Generic (PLEG): container finished" podID="05bd4ed8-7e38-4469-b295-f475bf906342" containerID="25212514badf496f706007c603486cf621e4055705ad7924ef723537cdfaf38f" exitCode=0 Jan 05 23:26:24 crc kubenswrapper[5034]: I0105 23:26:24.506393 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-k4s29" event={"ID":"05bd4ed8-7e38-4469-b295-f475bf906342","Type":"ContainerDied","Data":"25212514badf496f706007c603486cf621e4055705ad7924ef723537cdfaf38f"} Jan 05 23:26:24 crc kubenswrapper[5034]: I0105 23:26:24.506434 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-k4s29" event={"ID":"05bd4ed8-7e38-4469-b295-f475bf906342","Type":"ContainerStarted","Data":"caf984dc24e45f8ef3617daf281108b9656488a788d275c038b8732d423092c7"} Jan 05 23:26:25 crc kubenswrapper[5034]: I0105 23:26:25.928029 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:25 crc kubenswrapper[5034]: I0105 23:26:25.941844 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-k4s29" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.011970 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05bd4ed8-7e38-4469-b295-f475bf906342-operator-scripts\") pod \"05bd4ed8-7e38-4469-b295-f475bf906342\" (UID: \"05bd4ed8-7e38-4469-b295-f475bf906342\") " Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.012116 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6a9cba-014e-4c12-8c00-114299efe216-operator-scripts\") pod \"1f6a9cba-014e-4c12-8c00-114299efe216\" (UID: \"1f6a9cba-014e-4c12-8c00-114299efe216\") " Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.012340 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmqm9\" (UniqueName: \"kubernetes.io/projected/1f6a9cba-014e-4c12-8c00-114299efe216-kube-api-access-zmqm9\") pod \"1f6a9cba-014e-4c12-8c00-114299efe216\" (UID: \"1f6a9cba-014e-4c12-8c00-114299efe216\") " Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.012392 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm6ln\" (UniqueName: \"kubernetes.io/projected/05bd4ed8-7e38-4469-b295-f475bf906342-kube-api-access-pm6ln\") pod \"05bd4ed8-7e38-4469-b295-f475bf906342\" (UID: \"05bd4ed8-7e38-4469-b295-f475bf906342\") " Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.013036 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05bd4ed8-7e38-4469-b295-f475bf906342-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05bd4ed8-7e38-4469-b295-f475bf906342" (UID: "05bd4ed8-7e38-4469-b295-f475bf906342"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.015730 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f6a9cba-014e-4c12-8c00-114299efe216-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f6a9cba-014e-4c12-8c00-114299efe216" (UID: "1f6a9cba-014e-4c12-8c00-114299efe216"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.020183 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6a9cba-014e-4c12-8c00-114299efe216-kube-api-access-zmqm9" (OuterVolumeSpecName: "kube-api-access-zmqm9") pod "1f6a9cba-014e-4c12-8c00-114299efe216" (UID: "1f6a9cba-014e-4c12-8c00-114299efe216"). InnerVolumeSpecName "kube-api-access-zmqm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.022359 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bd4ed8-7e38-4469-b295-f475bf906342-kube-api-access-pm6ln" (OuterVolumeSpecName: "kube-api-access-pm6ln") pod "05bd4ed8-7e38-4469-b295-f475bf906342" (UID: "05bd4ed8-7e38-4469-b295-f475bf906342"). InnerVolumeSpecName "kube-api-access-pm6ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.114327 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05bd4ed8-7e38-4469-b295-f475bf906342-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.114628 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f6a9cba-014e-4c12-8c00-114299efe216-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.114642 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmqm9\" (UniqueName: \"kubernetes.io/projected/1f6a9cba-014e-4c12-8c00-114299efe216-kube-api-access-zmqm9\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.114651 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm6ln\" (UniqueName: \"kubernetes.io/projected/05bd4ed8-7e38-4469-b295-f475bf906342-kube-api-access-pm6ln\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.525881 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6aa4-account-create-update-zfzvs" event={"ID":"1f6a9cba-014e-4c12-8c00-114299efe216","Type":"ContainerDied","Data":"8deb8a3b6cab95b1f50a60a97610efdf2aeab5cd4708b05186b45e2ac27add54"} Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.525942 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8deb8a3b6cab95b1f50a60a97610efdf2aeab5cd4708b05186b45e2ac27add54" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.525901 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6aa4-account-create-update-zfzvs" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.528150 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-k4s29" event={"ID":"05bd4ed8-7e38-4469-b295-f475bf906342","Type":"ContainerDied","Data":"caf984dc24e45f8ef3617daf281108b9656488a788d275c038b8732d423092c7"} Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.528190 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf984dc24e45f8ef3617daf281108b9656488a788d275c038b8732d423092c7" Jan 05 23:26:26 crc kubenswrapper[5034]: I0105 23:26:26.528230 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-k4s29" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.538538 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-69ktk"] Jan 05 23:26:28 crc kubenswrapper[5034]: E0105 23:26:28.539314 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bd4ed8-7e38-4469-b295-f475bf906342" containerName="mariadb-database-create" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.539338 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bd4ed8-7e38-4469-b295-f475bf906342" containerName="mariadb-database-create" Jan 05 23:26:28 crc kubenswrapper[5034]: E0105 23:26:28.539377 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6a9cba-014e-4c12-8c00-114299efe216" containerName="mariadb-account-create-update" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.539383 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6a9cba-014e-4c12-8c00-114299efe216" containerName="mariadb-account-create-update" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.539570 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6a9cba-014e-4c12-8c00-114299efe216" containerName="mariadb-account-create-update" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.539591 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bd4ed8-7e38-4469-b295-f475bf906342" containerName="mariadb-database-create" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.540376 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.543966 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.546582 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r5jhl" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.550542 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-69ktk"] Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.674947 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-db-sync-config-data\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.675036 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-config-data\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.675324 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8s4\" (UniqueName: \"kubernetes.io/projected/9b340388-a44b-4ce6-8a42-6b835309583d-kube-api-access-pv8s4\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.675464 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-combined-ca-bundle\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.777070 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv8s4\" (UniqueName: \"kubernetes.io/projected/9b340388-a44b-4ce6-8a42-6b835309583d-kube-api-access-pv8s4\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.777159 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-combined-ca-bundle\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.777218 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-db-sync-config-data\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.777253 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-config-data\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.784976 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-db-sync-config-data\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.785304 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-combined-ca-bundle\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.791759 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-config-data\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.797618 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv8s4\" (UniqueName: \"kubernetes.io/projected/9b340388-a44b-4ce6-8a42-6b835309583d-kube-api-access-pv8s4\") pod \"glance-db-sync-69ktk\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:28 crc kubenswrapper[5034]: I0105 23:26:28.861474 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:29 crc kubenswrapper[5034]: I0105 23:26:29.466851 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-69ktk"] Jan 05 23:26:29 crc kubenswrapper[5034]: W0105 23:26:29.478736 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b340388_a44b_4ce6_8a42_6b835309583d.slice/crio-a5b08c525832f655de1d1f9e9393c283599cc3494e37f20b4066354f9e9e813f WatchSource:0}: Error finding container a5b08c525832f655de1d1f9e9393c283599cc3494e37f20b4066354f9e9e813f: Status 404 returned error can't find the container with id a5b08c525832f655de1d1f9e9393c283599cc3494e37f20b4066354f9e9e813f Jan 05 23:26:29 crc kubenswrapper[5034]: I0105 23:26:29.559486 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-69ktk" event={"ID":"9b340388-a44b-4ce6-8a42-6b835309583d","Type":"ContainerStarted","Data":"a5b08c525832f655de1d1f9e9393c283599cc3494e37f20b4066354f9e9e813f"} Jan 05 23:26:30 crc kubenswrapper[5034]: I0105 23:26:30.569752 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-69ktk" event={"ID":"9b340388-a44b-4ce6-8a42-6b835309583d","Type":"ContainerStarted","Data":"dc20f1ffc2649b535c454a4fafa1cb891ad9004869dd5ec74de29a803c43b5d4"} Jan 05 23:26:30 crc kubenswrapper[5034]: I0105 23:26:30.597114 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-69ktk" podStartSLOduration=2.597072844 podStartE2EDuration="2.597072844s" podCreationTimestamp="2026-01-05 23:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:26:30.593354839 +0000 UTC m=+5682.965354288" watchObservedRunningTime="2026-01-05 23:26:30.597072844 +0000 UTC m=+5682.969072283" Jan 05 23:26:33 crc kubenswrapper[5034]: I0105 23:26:33.608858 5034 generic.go:334] "Generic (PLEG): container finished" podID="9b340388-a44b-4ce6-8a42-6b835309583d" containerID="dc20f1ffc2649b535c454a4fafa1cb891ad9004869dd5ec74de29a803c43b5d4" exitCode=0 Jan 05 23:26:33 crc kubenswrapper[5034]: I0105 23:26:33.609255 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-69ktk" event={"ID":"9b340388-a44b-4ce6-8a42-6b835309583d","Type":"ContainerDied","Data":"dc20f1ffc2649b535c454a4fafa1cb891ad9004869dd5ec74de29a803c43b5d4"} Jan 05 23:26:33 crc kubenswrapper[5034]: I0105 23:26:33.839139 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:26:33 crc kubenswrapper[5034]: E0105 23:26:33.839419 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.002455 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.143659 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-config-data\") pod \"9b340388-a44b-4ce6-8a42-6b835309583d\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.144175 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-combined-ca-bundle\") pod \"9b340388-a44b-4ce6-8a42-6b835309583d\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.144313 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-db-sync-config-data\") pod \"9b340388-a44b-4ce6-8a42-6b835309583d\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.144452 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv8s4\" (UniqueName: \"kubernetes.io/projected/9b340388-a44b-4ce6-8a42-6b835309583d-kube-api-access-pv8s4\") pod \"9b340388-a44b-4ce6-8a42-6b835309583d\" (UID: \"9b340388-a44b-4ce6-8a42-6b835309583d\") " Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.150272 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b340388-a44b-4ce6-8a42-6b835309583d-kube-api-access-pv8s4" (OuterVolumeSpecName: "kube-api-access-pv8s4") pod "9b340388-a44b-4ce6-8a42-6b835309583d" (UID: "9b340388-a44b-4ce6-8a42-6b835309583d"). InnerVolumeSpecName "kube-api-access-pv8s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.151027 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9b340388-a44b-4ce6-8a42-6b835309583d" (UID: "9b340388-a44b-4ce6-8a42-6b835309583d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.168281 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b340388-a44b-4ce6-8a42-6b835309583d" (UID: "9b340388-a44b-4ce6-8a42-6b835309583d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.189464 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-config-data" (OuterVolumeSpecName: "config-data") pod "9b340388-a44b-4ce6-8a42-6b835309583d" (UID: "9b340388-a44b-4ce6-8a42-6b835309583d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.246658 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.246706 5034 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.246717 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv8s4\" (UniqueName: \"kubernetes.io/projected/9b340388-a44b-4ce6-8a42-6b835309583d-kube-api-access-pv8s4\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.246728 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b340388-a44b-4ce6-8a42-6b835309583d-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.635022 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-69ktk" event={"ID":"9b340388-a44b-4ce6-8a42-6b835309583d","Type":"ContainerDied","Data":"a5b08c525832f655de1d1f9e9393c283599cc3494e37f20b4066354f9e9e813f"} Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.635068 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b08c525832f655de1d1f9e9393c283599cc3494e37f20b4066354f9e9e813f" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.635096 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-69ktk" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.926722 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:35 crc kubenswrapper[5034]: E0105 23:26:35.927327 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b340388-a44b-4ce6-8a42-6b835309583d" containerName="glance-db-sync" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.927356 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b340388-a44b-4ce6-8a42-6b835309583d" containerName="glance-db-sync" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.927606 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b340388-a44b-4ce6-8a42-6b835309583d" containerName="glance-db-sync" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.930200 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.933131 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.933394 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.933504 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r5jhl" Jan 05 23:26:35 crc kubenswrapper[5034]: I0105 23:26:35.944956 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.059876 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68bd7cb495-p2wxr"] Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.061499 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.063027 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.063139 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.063186 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-logs\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.063317 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.063458 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbxj7\" (UniqueName: \"kubernetes.io/projected/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-kube-api-access-wbxj7\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.063829 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc 
kubenswrapper[5034]: I0105 23:26:36.077834 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68bd7cb495-p2wxr"] Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.166882 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-sb\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167003 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167116 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-config\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167174 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-dns-svc\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167204 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7llvv\" (UniqueName: \"kubernetes.io/projected/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-kube-api-access-7llvv\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167247 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167309 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167353 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-nb\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167390 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-logs\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167420 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167484 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbxj7\" (UniqueName: \"kubernetes.io/projected/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-kube-api-access-wbxj7\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.167717 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.168396 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-logs\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.174690 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.174994 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.188223 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.188311 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbxj7\" (UniqueName: \"kubernetes.io/projected/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-kube-api-access-wbxj7\") pod \"glance-default-external-api-0\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.231668 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.243721 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.249839 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.256487 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.257945 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.269753 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-sb\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.269870 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-config\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.269915 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-dns-svc\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.269938 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7llvv\" (UniqueName: \"kubernetes.io/projected/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-kube-api-access-7llvv\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.269987 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-nb\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.271137 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-config\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.285636 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-dns-svc\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.286006 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-nb\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: 
\"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.286232 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-sb\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.312736 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7llvv\" (UniqueName: \"kubernetes.io/projected/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-kube-api-access-7llvv\") pod \"dnsmasq-dns-68bd7cb495-p2wxr\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.373002 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-logs\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.373150 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.373309 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.373802 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.373957 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.374006 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rzf\" (UniqueName: \"kubernetes.io/projected/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-kube-api-access-m2rzf\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.379819 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.475662 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.476071 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.476144 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.476171 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rzf\" (UniqueName: \"kubernetes.io/projected/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-kube-api-access-m2rzf\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.476197 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-logs\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.476240 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.479438 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-logs\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.479509 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.485788 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.485843 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.490124 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.496842 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2rzf\" (UniqueName: \"kubernetes.io/projected/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-kube-api-access-m2rzf\") pod \"glance-default-internal-api-0\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.603642 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.941033 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:36 crc kubenswrapper[5034]: I0105 23:26:36.973064 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68bd7cb495-p2wxr"] Jan 05 23:26:36 crc kubenswrapper[5034]: W0105 23:26:36.986799 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd4b73dd_10a6_451a_bcc4_f64cb7fc7180.slice/crio-2dee03d72b004555c6b2840b331b2d94ba61eb5ec87e15b056f4a106a3ac8680 WatchSource:0}: Error finding container 2dee03d72b004555c6b2840b331b2d94ba61eb5ec87e15b056f4a106a3ac8680: Status 404 returned error can't find the container with id 2dee03d72b004555c6b2840b331b2d94ba61eb5ec87e15b056f4a106a3ac8680 Jan 05 23:26:37 crc kubenswrapper[5034]: I0105 23:26:37.075918 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:37 crc kubenswrapper[5034]: I0105 23:26:37.259437 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:37 crc kubenswrapper[5034]: I0105 23:26:37.676769 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e","Type":"ContainerStarted","Data":"89a126174a3617e0ef050bbabf315fef2711f259970a95a1e87a9c4d18954ec3"} Jan 05 23:26:37 crc kubenswrapper[5034]: I0105 23:26:37.679407 5034 generic.go:334] "Generic (PLEG): container finished" podID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" containerID="2b2821d6d7c74001485215370d3f90dc339fd45a24e60f2319d2045675228cee" exitCode=0 Jan 05 23:26:37 crc kubenswrapper[5034]: I0105 23:26:37.679540 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" event={"ID":"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180","Type":"ContainerDied","Data":"2b2821d6d7c74001485215370d3f90dc339fd45a24e60f2319d2045675228cee"} Jan 05 23:26:37 crc kubenswrapper[5034]: I0105 23:26:37.679632 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" 
event={"ID":"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180","Type":"ContainerStarted","Data":"2dee03d72b004555c6b2840b331b2d94ba61eb5ec87e15b056f4a106a3ac8680"} Jan 05 23:26:37 crc kubenswrapper[5034]: I0105 23:26:37.682811 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9","Type":"ContainerStarted","Data":"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698"} Jan 05 23:26:37 crc kubenswrapper[5034]: I0105 23:26:37.682883 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9","Type":"ContainerStarted","Data":"73c8aa2f7e4a0052e031842d859bf5a395caf1a77feb03f574bc4e2abe3bda82"} Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.403796 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.695758 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" event={"ID":"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180","Type":"ContainerStarted","Data":"019b8ad9abe7b40563d3f5bea32e9773a4959f4c637269f1e9224a422b41d7d5"} Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.697400 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.701233 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerName="glance-log" containerID="cri-o://e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698" gracePeriod=30 Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.701564 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9","Type":"ContainerStarted","Data":"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c"} Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.701733 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerName="glance-httpd" containerID="cri-o://ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c" gracePeriod=30 Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.705831 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e","Type":"ContainerStarted","Data":"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c"} Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.705978 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerName="glance-log" containerID="cri-o://fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2" gracePeriod=30 Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.706065 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e","Type":"ContainerStarted","Data":"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2"} Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.706048 5034 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerName="glance-httpd" containerID="cri-o://5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c" gracePeriod=30 Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.724441 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" podStartSLOduration=2.724412114 podStartE2EDuration="2.724412114s" podCreationTimestamp="2026-01-05 23:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:26:38.71794596 +0000 UTC m=+5691.089945399" watchObservedRunningTime="2026-01-05 23:26:38.724412114 +0000 UTC m=+5691.096411553" Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.748014 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.7479854230000003 podStartE2EDuration="3.747985423s" podCreationTimestamp="2026-01-05 23:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:26:38.741635193 +0000 UTC m=+5691.113634632" watchObservedRunningTime="2026-01-05 23:26:38.747985423 +0000 UTC m=+5691.119984862" Jan 05 23:26:38 crc kubenswrapper[5034]: I0105 23:26:38.775303 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.775272817 podStartE2EDuration="2.775272817s" podCreationTimestamp="2026-01-05 23:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:26:38.77148252 +0000 UTC m=+5691.143481959" watchObservedRunningTime="2026-01-05 23:26:38.775272817 +0000 UTC m=+5691.147272256" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.381473 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.387653 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495014 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-config-data\") pod \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495072 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-httpd-run\") pod \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495104 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-httpd-run\") pod \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495243 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2rzf\" (UniqueName: \"kubernetes.io/projected/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-kube-api-access-m2rzf\") pod \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495271 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-config-data\") pod \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495301 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbxj7\" (UniqueName: \"kubernetes.io/projected/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-kube-api-access-wbxj7\") pod \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495348 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-combined-ca-bundle\") pod \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495402 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-logs\") pod \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.495437 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-combined-ca-bundle\") pod \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.497071 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" (UID: 
"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.497327 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-logs" (OuterVolumeSpecName: "logs") pod "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" (UID: "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.497523 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" (UID: "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.497973 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-scripts\") pod \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.498267 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-logs\") pod \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\" (UID: \"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.498300 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-scripts\") pod \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\" (UID: \"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9\") " Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.498629 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-logs" (OuterVolumeSpecName: "logs") pod "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" (UID: "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.499888 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.499919 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.499933 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.499946 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.503262 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-kube-api-access-m2rzf" (OuterVolumeSpecName: "kube-api-access-m2rzf") pod "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" (UID: "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e"). InnerVolumeSpecName "kube-api-access-m2rzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.503427 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-kube-api-access-wbxj7" (OuterVolumeSpecName: "kube-api-access-wbxj7") pod "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" (UID: "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9"). InnerVolumeSpecName "kube-api-access-wbxj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.504062 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-scripts" (OuterVolumeSpecName: "scripts") pod "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" (UID: "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.504699 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-scripts" (OuterVolumeSpecName: "scripts") pod "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" (UID: "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.529606 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" (UID: "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.557437 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" (UID: "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.571265 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-config-data" (OuterVolumeSpecName: "config-data") pod "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" (UID: "948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.589741 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-config-data" (OuterVolumeSpecName: "config-data") pod "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" (UID: "cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.601874 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.601911 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbxj7\" (UniqueName: \"kubernetes.io/projected/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-kube-api-access-wbxj7\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.601925 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.601935 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.601945 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.601953 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.601961 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.601972 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2rzf\" (UniqueName: \"kubernetes.io/projected/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e-kube-api-access-m2rzf\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.718352 5034 
generic.go:334] "Generic (PLEG): container finished" podID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerID="ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c" exitCode=0 Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.718386 5034 generic.go:334] "Generic (PLEG): container finished" podID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerID="e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698" exitCode=143 Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.718431 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9","Type":"ContainerDied","Data":"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c"} Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.718463 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9","Type":"ContainerDied","Data":"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698"} Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.718457 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.718486 5034 scope.go:117] "RemoveContainer" containerID="ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.718472 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9","Type":"ContainerDied","Data":"73c8aa2f7e4a0052e031842d859bf5a395caf1a77feb03f574bc4e2abe3bda82"} Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.722997 5034 generic.go:334] "Generic (PLEG): container finished" podID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerID="5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c" exitCode=143 Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.723043 5034 generic.go:334] "Generic (PLEG): container finished" podID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerID="fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2" exitCode=143 Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.724388 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.724634 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e","Type":"ContainerDied","Data":"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c"} Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.724685 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e","Type":"ContainerDied","Data":"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2"} Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.724723 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e","Type":"ContainerDied","Data":"89a126174a3617e0ef050bbabf315fef2711f259970a95a1e87a9c4d18954ec3"} Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.759890 5034 scope.go:117] "RemoveContainer" containerID="e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.788854 5034 scope.go:117] "RemoveContainer" containerID="ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c" Jan 05 23:26:39 crc kubenswrapper[5034]: E0105 23:26:39.789550 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c\": container with ID starting with ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c not found: ID does not exist" containerID="ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.789604 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c"} err="failed to get container status \"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c\": rpc error: code = NotFound desc = could not find container \"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c\": container with ID starting with ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c not found: ID does not exist" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.789637 5034 scope.go:117] "RemoveContainer" containerID="e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698" Jan 05 23:26:39 crc kubenswrapper[5034]: E0105 23:26:39.790258 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698\": container with ID starting with e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698 not found: ID does not exist" containerID="e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.790314 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698"} err="failed to get container status \"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698\": rpc error: code = NotFound desc = could not find container \"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698\": container with ID 
starting with e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698 not found: ID does not exist" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.790355 5034 scope.go:117] "RemoveContainer" containerID="ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.790811 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c"} err="failed to get container status \"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c\": rpc error: code = NotFound desc = could not find container \"ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c\": container with ID starting with ade9ef471c7002a54005279a9d42061488686bceb65090f009e730923a866b3c not found: ID does not exist" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.790871 5034 scope.go:117] "RemoveContainer" containerID="e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.791278 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698"} err="failed to get container status \"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698\": rpc error: code = NotFound desc = could not find container \"e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698\": container with ID starting with e2e0e8513972df46f3a8a346d39e01773a9c2a6e6795fdbf1dc94c1f62bbf698 not found: ID does not exist" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.791300 5034 scope.go:117] "RemoveContainer" containerID="5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.808938 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.831767 5034 scope.go:117] "RemoveContainer" containerID="fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.868924 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.868970 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.871197 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879054 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:39 crc kubenswrapper[5034]: E0105 23:26:39.879596 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerName="glance-log" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879622 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerName="glance-log" Jan 05 23:26:39 crc kubenswrapper[5034]: E0105 23:26:39.879644 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerName="glance-httpd" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879651 5034 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerName="glance-httpd" Jan 05 23:26:39 crc kubenswrapper[5034]: E0105 23:26:39.879668 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerName="glance-log" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879675 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerName="glance-log" Jan 05 23:26:39 crc kubenswrapper[5034]: E0105 23:26:39.879694 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerName="glance-httpd" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879699 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerName="glance-httpd" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879863 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerName="glance-httpd" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879880 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerName="glance-log" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879897 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" containerName="glance-httpd" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.879906 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" containerName="glance-log" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.881191 5034 scope.go:117] "RemoveContainer" containerID="5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.882204 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:39 crc kubenswrapper[5034]: E0105 23:26:39.883162 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c\": container with ID starting with 5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c not found: ID does not exist" containerID="5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.883216 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c"} err="failed to get container status \"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c\": rpc error: code = NotFound desc = could not find container \"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c\": container with ID starting with 5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c not found: ID does not exist" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.883255 5034 scope.go:117] "RemoveContainer" containerID="fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.884654 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 05 23:26:39 crc kubenswrapper[5034]: E0105 23:26:39.884808 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2\": container with ID starting with fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2 not found: ID does not exist" containerID="fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.884846 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2"} err="failed to get container status \"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2\": rpc error: code = NotFound desc = could not find container \"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2\": container with ID starting with fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2 not found: ID does not exist" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.884863 5034 scope.go:117] "RemoveContainer" containerID="5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.885054 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r5jhl" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.886042 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.887394 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c"} err="failed to get container status \"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c\": rpc error: code = NotFound desc = could not find container \"5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c\": container with ID starting with 
5edf689d1b9284698beebedfca39072268de46081110994708ec8450f0fc262c not found: ID does not exist" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.887450 5034 scope.go:117] "RemoveContainer" containerID="fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.890123 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.890288 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.892121 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2"} err="failed to get container status \"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2\": rpc error: code = NotFound desc = could not find container \"fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2\": container with ID starting with fda30342dff547656fa4aac3366fdcf412dae132c853de6a948660e9d15080d2 not found: ID does not exist" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.893224 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.895478 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.897824 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.901458 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:39 crc kubenswrapper[5034]: I0105 23:26:39.909720 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.013692 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvq4w\" (UniqueName: \"kubernetes.io/projected/46cfc4db-fd97-43d5-b21e-39d6059528a2-kube-api-access-vvq4w\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014216 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014259 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014312 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014342 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014368 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-logs\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014522 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014592 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014624 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpqd\" (UniqueName: \"kubernetes.io/projected/5a290e39-d771-4f71-9568-489221fc4570-kube-api-access-9cpqd\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014787 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014821 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.014949 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.015000 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.015062 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117549 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117599 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117640 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117666 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117692 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117742 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvq4w\" (UniqueName: \"kubernetes.io/projected/46cfc4db-fd97-43d5-b21e-39d6059528a2-kube-api-access-vvq4w\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117763 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117790 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117834 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117860 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117878 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-logs\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117906 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117926 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.117947 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cpqd\" (UniqueName: \"kubernetes.io/projected/5a290e39-d771-4f71-9568-489221fc4570-kube-api-access-9cpqd\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.118845 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.118904 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.119197 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.119557 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-logs\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.124807 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.124841 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.124920 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.126004 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.132289 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.133840 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.134060 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.134565 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " 
pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.139363 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvq4w\" (UniqueName: \"kubernetes.io/projected/46cfc4db-fd97-43d5-b21e-39d6059528a2-kube-api-access-vvq4w\") pod \"glance-default-internal-api-0\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.156900 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cpqd\" (UniqueName: \"kubernetes.io/projected/5a290e39-d771-4f71-9568-489221fc4570-kube-api-access-9cpqd\") pod \"glance-default-external-api-0\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.212919 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.229445 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:26:40 crc kubenswrapper[5034]: I0105 23:26:40.915037 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:26:41 crc kubenswrapper[5034]: I0105 23:26:41.021986 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:26:41 crc kubenswrapper[5034]: W0105 23:26:41.025707 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a290e39_d771_4f71_9568_489221fc4570.slice/crio-cf2d2d84a47dd4b9e1c9f4a0428c87a56c45db8f9c8b203be8b0088347b92da6 WatchSource:0}: Error finding container cf2d2d84a47dd4b9e1c9f4a0428c87a56c45db8f9c8b203be8b0088347b92da6: Status 404 returned error can't find the container with id cf2d2d84a47dd4b9e1c9f4a0428c87a56c45db8f9c8b203be8b0088347b92da6 Jan 05 23:26:41 crc kubenswrapper[5034]: I0105 23:26:41.753781 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46cfc4db-fd97-43d5-b21e-39d6059528a2","Type":"ContainerStarted","Data":"f2c3e58355fc4aeeedb44875a1397f73b9aa25aeebd23b3218cd077c5a3328cd"} Jan 05 23:26:41 crc kubenswrapper[5034]: I0105 23:26:41.754259 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46cfc4db-fd97-43d5-b21e-39d6059528a2","Type":"ContainerStarted","Data":"e62c63e63ac639a77d0afe9cf32b785959a728eb0e35e7efe060d88877d17d7f"} Jan 05 23:26:41 crc kubenswrapper[5034]: I0105 23:26:41.757725 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a290e39-d771-4f71-9568-489221fc4570","Type":"ContainerStarted","Data":"cf2d2d84a47dd4b9e1c9f4a0428c87a56c45db8f9c8b203be8b0088347b92da6"} Jan 05 23:26:41 crc kubenswrapper[5034]: I0105 23:26:41.853629 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e" path="/var/lib/kubelet/pods/948d1fd7-d0bd-4e84-9b2e-5bd60594cb3e/volumes" Jan 05 23:26:41 crc kubenswrapper[5034]: I0105 23:26:41.854828 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9" path="/var/lib/kubelet/pods/cd7cbbcd-901d-426c-8da6-6f9ce3f13cf9/volumes" Jan 05 23:26:42 crc 
kubenswrapper[5034]: I0105 23:26:42.768339 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46cfc4db-fd97-43d5-b21e-39d6059528a2","Type":"ContainerStarted","Data":"818f8dadc8311256610a142e506fba4ca86dd84aee1a66ac132afb8416098e21"} Jan 05 23:26:42 crc kubenswrapper[5034]: I0105 23:26:42.771444 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a290e39-d771-4f71-9568-489221fc4570","Type":"ContainerStarted","Data":"fedd32de1ea0dee23eed370e7e7aca2cd52d121f835ef2dec83aa400a5de5b65"} Jan 05 23:26:42 crc kubenswrapper[5034]: I0105 23:26:42.771498 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a290e39-d771-4f71-9568-489221fc4570","Type":"ContainerStarted","Data":"ad1fc8f8e22ddfbc234556389fe8a1a5a7cd43b27939001523a85ba87750a806"} Jan 05 23:26:42 crc kubenswrapper[5034]: I0105 23:26:42.819395 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.819365006 podStartE2EDuration="3.819365006s" podCreationTimestamp="2026-01-05 23:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:26:42.793179043 +0000 UTC m=+5695.165178482" watchObservedRunningTime="2026-01-05 23:26:42.819365006 +0000 UTC m=+5695.191364465" Jan 05 23:26:42 crc kubenswrapper[5034]: I0105 23:26:42.821501 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.821490186 podStartE2EDuration="3.821490186s" podCreationTimestamp="2026-01-05 23:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:26:42.817766701 +0000 UTC m=+5695.189766140" watchObservedRunningTime="2026-01-05 23:26:42.821490186 +0000 UTC m=+5695.193489635" Jan 05 23:26:44 crc kubenswrapper[5034]: I0105 23:26:44.839825 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:26:44 crc kubenswrapper[5034]: E0105 23:26:44.840713 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.381113 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.450492 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795bf77d9c-mrt84"] Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.450781 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" podUID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" containerName="dnsmasq-dns" containerID="cri-o://1cc2b81dd8a2b6e157ee5960d3cb640d865c44c0cfc16cfb5b09d81b3f27fff6" gracePeriod=10 Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.836608 5034 generic.go:334] "Generic (PLEG): 
container finished" podID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" containerID="1cc2b81dd8a2b6e157ee5960d3cb640d865c44c0cfc16cfb5b09d81b3f27fff6" exitCode=0 Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.837001 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" event={"ID":"36a04d15-352d-499c-a8e8-3ca3d15dd13b","Type":"ContainerDied","Data":"1cc2b81dd8a2b6e157ee5960d3cb640d865c44c0cfc16cfb5b09d81b3f27fff6"} Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.942917 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.973059 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-sb\") pod \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.973201 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-config\") pod \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.973416 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7hrw\" (UniqueName: \"kubernetes.io/projected/36a04d15-352d-499c-a8e8-3ca3d15dd13b-kube-api-access-f7hrw\") pod \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.973535 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-nb\") pod \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " Jan 05 23:26:46 crc kubenswrapper[5034]: I0105 23:26:46.973616 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-dns-svc\") pod \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\" (UID: \"36a04d15-352d-499c-a8e8-3ca3d15dd13b\") " Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.002772 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a04d15-352d-499c-a8e8-3ca3d15dd13b-kube-api-access-f7hrw" (OuterVolumeSpecName: "kube-api-access-f7hrw") pod "36a04d15-352d-499c-a8e8-3ca3d15dd13b" (UID: "36a04d15-352d-499c-a8e8-3ca3d15dd13b"). InnerVolumeSpecName "kube-api-access-f7hrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.040182 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36a04d15-352d-499c-a8e8-3ca3d15dd13b" (UID: "36a04d15-352d-499c-a8e8-3ca3d15dd13b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.048278 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36a04d15-352d-499c-a8e8-3ca3d15dd13b" (UID: "36a04d15-352d-499c-a8e8-3ca3d15dd13b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.049291 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-config" (OuterVolumeSpecName: "config") pod "36a04d15-352d-499c-a8e8-3ca3d15dd13b" (UID: "36a04d15-352d-499c-a8e8-3ca3d15dd13b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.056721 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36a04d15-352d-499c-a8e8-3ca3d15dd13b" (UID: "36a04d15-352d-499c-a8e8-3ca3d15dd13b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.075707 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7hrw\" (UniqueName: \"kubernetes.io/projected/36a04d15-352d-499c-a8e8-3ca3d15dd13b-kube-api-access-f7hrw\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.075750 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.075761 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.075771 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.075779 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36a04d15-352d-499c-a8e8-3ca3d15dd13b-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.849391 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.850100 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795bf77d9c-mrt84" event={"ID":"36a04d15-352d-499c-a8e8-3ca3d15dd13b","Type":"ContainerDied","Data":"1f66add5a336ce6a20ca0325e2401c0d55d42c5d2b7a0486ec950af0091feb6c"} Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.850154 5034 scope.go:117] "RemoveContainer" containerID="1cc2b81dd8a2b6e157ee5960d3cb640d865c44c0cfc16cfb5b09d81b3f27fff6" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.889034 5034 scope.go:117] "RemoveContainer" containerID="99f0ff6920366930dd6792706572b0fcb6db8d7f0bd7447974b55bd91ecb97d0" Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.907820 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795bf77d9c-mrt84"] Jan 05 23:26:47 crc kubenswrapper[5034]: I0105 23:26:47.918439 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795bf77d9c-mrt84"] Jan 05 23:26:49 crc kubenswrapper[5034]: I0105 23:26:49.848655 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" path="/var/lib/kubelet/pods/36a04d15-352d-499c-a8e8-3ca3d15dd13b/volumes" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.213587 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.213650 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.230566 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.230629 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.241527 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.253306 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.259892 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.285769 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.874310 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.874500 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.874528 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 23:26:50 crc kubenswrapper[5034]: I0105 23:26:50.874542 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:52 crc kubenswrapper[5034]: I0105 
23:26:52.847504 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:52 crc kubenswrapper[5034]: I0105 23:26:52.853676 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 23:26:52 crc kubenswrapper[5034]: I0105 23:26:52.884504 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 23:26:52 crc kubenswrapper[5034]: I0105 23:26:52.892652 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 23:26:52 crc kubenswrapper[5034]: I0105 23:26:52.995759 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 23:26:58 crc kubenswrapper[5034]: I0105 23:26:58.839753 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:26:58 crc kubenswrapper[5034]: E0105 23:26:58.840685 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:26:59 crc kubenswrapper[5034]: I0105 23:26:59.694037 5034 scope.go:117] "RemoveContainer" containerID="6b99077d759d95bfe2ea9a5bca8c399631cdd895d2593f5157ab81360499c4b1" Jan 05 23:26:59 crc kubenswrapper[5034]: I0105 23:26:59.721164 5034 scope.go:117] "RemoveContainer" containerID="23587fb5548b4f93872185f7d12ac56ec53db9bdf33699b9a5ad094605f7938a" Jan 05 23:26:59 crc kubenswrapper[5034]: I0105 23:26:59.738613 5034 scope.go:117] "RemoveContainer" containerID="105873b758b91d7e799060493683cdf41aac458ae64818dd4b2c31527a5af0a4" Jan 05 23:27:01 crc kubenswrapper[5034]: I0105 23:27:01.956796 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-sjpgk"] Jan 05 23:27:01 crc kubenswrapper[5034]: E0105 23:27:01.957473 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" containerName="init" Jan 05 23:27:01 crc kubenswrapper[5034]: I0105 23:27:01.957487 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" containerName="init" Jan 05 23:27:01 crc kubenswrapper[5034]: E0105 23:27:01.957514 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" containerName="dnsmasq-dns" Jan 05 23:27:01 crc kubenswrapper[5034]: I0105 23:27:01.957520 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" containerName="dnsmasq-dns" Jan 05 23:27:01 crc kubenswrapper[5034]: I0105 23:27:01.958045 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a04d15-352d-499c-a8e8-3ca3d15dd13b" containerName="dnsmasq-dns" Jan 05 23:27:01 crc kubenswrapper[5034]: I0105 23:27:01.958803 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-sjpgk" Jan 05 23:27:01 crc kubenswrapper[5034]: I0105 23:27:01.968023 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sjpgk"] Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.059599 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97147617-d53b-41e7-871a-218080186366-operator-scripts\") pod \"placement-db-create-sjpgk\" (UID: \"97147617-d53b-41e7-871a-218080186366\") " pod="openstack/placement-db-create-sjpgk" Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.059687 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s42x\" (UniqueName: \"kubernetes.io/projected/97147617-d53b-41e7-871a-218080186366-kube-api-access-2s42x\") pod \"placement-db-create-sjpgk\" (UID: \"97147617-d53b-41e7-871a-218080186366\") " pod="openstack/placement-db-create-sjpgk" Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.060898 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-833e-account-create-update-skxfj"] Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.062449 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-833e-account-create-update-skxfj" Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.065281 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.071858 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-833e-account-create-update-skxfj"] Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.161459 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70255284-3c60-4afa-afae-d8e33b86d065-operator-scripts\") pod \"placement-833e-account-create-update-skxfj\" (UID: \"70255284-3c60-4afa-afae-d8e33b86d065\") " pod="openstack/placement-833e-account-create-update-skxfj" Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.161590 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97147617-d53b-41e7-871a-218080186366-operator-scripts\") pod \"placement-db-create-sjpgk\" (UID: \"97147617-d53b-41e7-871a-218080186366\") " pod="openstack/placement-db-create-sjpgk" Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.161647 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s42x\" (UniqueName: \"kubernetes.io/projected/97147617-d53b-41e7-871a-218080186366-kube-api-access-2s42x\") pod \"placement-db-create-sjpgk\" (UID: \"97147617-d53b-41e7-871a-218080186366\") " pod="openstack/placement-db-create-sjpgk" Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.161682 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlcq\" (UniqueName: \"kubernetes.io/projected/70255284-3c60-4afa-afae-d8e33b86d065-kube-api-access-2nlcq\") pod \"placement-833e-account-create-update-skxfj\" (UID: \"70255284-3c60-4afa-afae-d8e33b86d065\") " pod="openstack/placement-833e-account-create-update-skxfj" Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.162508 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97147617-d53b-41e7-871a-218080186366-operator-scripts\") pod \"placement-db-create-sjpgk\" (UID: \"97147617-d53b-41e7-871a-218080186366\") " pod="openstack/placement-db-create-sjpgk"
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.189643 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s42x\" (UniqueName: \"kubernetes.io/projected/97147617-d53b-41e7-871a-218080186366-kube-api-access-2s42x\") pod \"placement-db-create-sjpgk\" (UID: \"97147617-d53b-41e7-871a-218080186366\") " pod="openstack/placement-db-create-sjpgk"
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.263293 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nlcq\" (UniqueName: \"kubernetes.io/projected/70255284-3c60-4afa-afae-d8e33b86d065-kube-api-access-2nlcq\") pod \"placement-833e-account-create-update-skxfj\" (UID: \"70255284-3c60-4afa-afae-d8e33b86d065\") " pod="openstack/placement-833e-account-create-update-skxfj"
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.263373 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70255284-3c60-4afa-afae-d8e33b86d065-operator-scripts\") pod \"placement-833e-account-create-update-skxfj\" (UID: \"70255284-3c60-4afa-afae-d8e33b86d065\") " pod="openstack/placement-833e-account-create-update-skxfj"
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.264266 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70255284-3c60-4afa-afae-d8e33b86d065-operator-scripts\") pod \"placement-833e-account-create-update-skxfj\" (UID: \"70255284-3c60-4afa-afae-d8e33b86d065\") " pod="openstack/placement-833e-account-create-update-skxfj"
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.275805 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sjpgk"
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.286758 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nlcq\" (UniqueName: \"kubernetes.io/projected/70255284-3c60-4afa-afae-d8e33b86d065-kube-api-access-2nlcq\") pod \"placement-833e-account-create-update-skxfj\" (UID: \"70255284-3c60-4afa-afae-d8e33b86d065\") " pod="openstack/placement-833e-account-create-update-skxfj"
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.397150 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-833e-account-create-update-skxfj"
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.924330 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sjpgk"]
Jan 05 23:27:02 crc kubenswrapper[5034]: I0105 23:27:02.977276 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sjpgk" event={"ID":"97147617-d53b-41e7-871a-218080186366","Type":"ContainerStarted","Data":"0083f339425713fed622e4e7e889b3882b9d721187bfb77fe8d4e88788f91a49"}
Jan 05 23:27:03 crc kubenswrapper[5034]: I0105 23:27:03.110382 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-833e-account-create-update-skxfj"]
Jan 05 23:27:03 crc kubenswrapper[5034]: W0105 23:27:03.119209 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70255284_3c60_4afa_afae_d8e33b86d065.slice/crio-649938974dd8b1f1c9ff90562c0c6eb21e56de4bb63bbe04a9bfaca021cdf339 WatchSource:0}: Error finding container 649938974dd8b1f1c9ff90562c0c6eb21e56de4bb63bbe04a9bfaca021cdf339: Status 404 returned error can't find the container with id 649938974dd8b1f1c9ff90562c0c6eb21e56de4bb63bbe04a9bfaca021cdf339
Jan 05 23:27:03 crc kubenswrapper[5034]: I0105 23:27:03.991144 5034 generic.go:334] "Generic (PLEG): container finished" podID="70255284-3c60-4afa-afae-d8e33b86d065" containerID="0b9af151b6bda7bb0a4091f8cda4574e9b6c881ebb0ed1a95f7cc8a61ce5dc48" exitCode=0
Jan 05 23:27:03 crc kubenswrapper[5034]: I0105 23:27:03.991247 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-833e-account-create-update-skxfj" event={"ID":"70255284-3c60-4afa-afae-d8e33b86d065","Type":"ContainerDied","Data":"0b9af151b6bda7bb0a4091f8cda4574e9b6c881ebb0ed1a95f7cc8a61ce5dc48"}
Jan 05 23:27:03 crc kubenswrapper[5034]: I0105 23:27:03.991546 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-833e-account-create-update-skxfj" event={"ID":"70255284-3c60-4afa-afae-d8e33b86d065","Type":"ContainerStarted","Data":"649938974dd8b1f1c9ff90562c0c6eb21e56de4bb63bbe04a9bfaca021cdf339"}
Jan 05 23:27:03 crc kubenswrapper[5034]: I0105 23:27:03.992878 5034 generic.go:334] "Generic (PLEG): container finished" podID="97147617-d53b-41e7-871a-218080186366" containerID="af0c1a41334f2803ecff6ba52350d932e39a2ca2c9936ca0ab7f2434ce1b9699" exitCode=0
Jan 05 23:27:03 crc kubenswrapper[5034]: I0105 23:27:03.992909 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sjpgk" event={"ID":"97147617-d53b-41e7-871a-218080186366","Type":"ContainerDied","Data":"af0c1a41334f2803ecff6ba52350d932e39a2ca2c9936ca0ab7f2434ce1b9699"}
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.232311 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qn4md"]
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.234929 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.276352 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qn4md"]
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.410495 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-utilities\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.410581 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8q6b\" (UniqueName: \"kubernetes.io/projected/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-kube-api-access-g8q6b\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.410601 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-catalog-content\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.512942 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-utilities\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.513032 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8q6b\" (UniqueName: \"kubernetes.io/projected/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-kube-api-access-g8q6b\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.513060 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-catalog-content\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.513694 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-catalog-content\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.513847 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-utilities\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.536915 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8q6b\" (UniqueName: \"kubernetes.io/projected/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-kube-api-access-g8q6b\") pod \"community-operators-qn4md\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") " pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:04 crc kubenswrapper[5034]: I0105 23:27:04.625512 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.169610 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qn4md"]
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.424496 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-833e-account-create-update-skxfj"
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.432771 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sjpgk"
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.532336 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70255284-3c60-4afa-afae-d8e33b86d065-operator-scripts\") pod \"70255284-3c60-4afa-afae-d8e33b86d065\" (UID: \"70255284-3c60-4afa-afae-d8e33b86d065\") "
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.532427 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nlcq\" (UniqueName: \"kubernetes.io/projected/70255284-3c60-4afa-afae-d8e33b86d065-kube-api-access-2nlcq\") pod \"70255284-3c60-4afa-afae-d8e33b86d065\" (UID: \"70255284-3c60-4afa-afae-d8e33b86d065\") "
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.532496 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97147617-d53b-41e7-871a-218080186366-operator-scripts\") pod \"97147617-d53b-41e7-871a-218080186366\" (UID: \"97147617-d53b-41e7-871a-218080186366\") "
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.532553 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s42x\" (UniqueName: \"kubernetes.io/projected/97147617-d53b-41e7-871a-218080186366-kube-api-access-2s42x\") pod \"97147617-d53b-41e7-871a-218080186366\" (UID: \"97147617-d53b-41e7-871a-218080186366\") "
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.533842 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70255284-3c60-4afa-afae-d8e33b86d065-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70255284-3c60-4afa-afae-d8e33b86d065" (UID: "70255284-3c60-4afa-afae-d8e33b86d065"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.534686 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97147617-d53b-41e7-871a-218080186366-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97147617-d53b-41e7-871a-218080186366" (UID: "97147617-d53b-41e7-871a-218080186366"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.557105 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70255284-3c60-4afa-afae-d8e33b86d065-kube-api-access-2nlcq" (OuterVolumeSpecName: "kube-api-access-2nlcq") pod "70255284-3c60-4afa-afae-d8e33b86d065" (UID: "70255284-3c60-4afa-afae-d8e33b86d065"). InnerVolumeSpecName "kube-api-access-2nlcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.557284 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97147617-d53b-41e7-871a-218080186366-kube-api-access-2s42x" (OuterVolumeSpecName: "kube-api-access-2s42x") pod "97147617-d53b-41e7-871a-218080186366" (UID: "97147617-d53b-41e7-871a-218080186366"). InnerVolumeSpecName "kube-api-access-2s42x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.635257 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70255284-3c60-4afa-afae-d8e33b86d065-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.635296 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nlcq\" (UniqueName: \"kubernetes.io/projected/70255284-3c60-4afa-afae-d8e33b86d065-kube-api-access-2nlcq\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.635308 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97147617-d53b-41e7-871a-218080186366-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:05 crc kubenswrapper[5034]: I0105 23:27:05.635318 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s42x\" (UniqueName: \"kubernetes.io/projected/97147617-d53b-41e7-871a-218080186366-kube-api-access-2s42x\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.010218 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-833e-account-create-update-skxfj"
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.010242 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-833e-account-create-update-skxfj" event={"ID":"70255284-3c60-4afa-afae-d8e33b86d065","Type":"ContainerDied","Data":"649938974dd8b1f1c9ff90562c0c6eb21e56de4bb63bbe04a9bfaca021cdf339"}
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.010586 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="649938974dd8b1f1c9ff90562c0c6eb21e56de4bb63bbe04a9bfaca021cdf339"
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.012200 5034 generic.go:334] "Generic (PLEG): container finished" podID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerID="9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00" exitCode=0
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.012243 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn4md" event={"ID":"d7d4cd7e-0a84-400e-bc97-ab7d5c549837","Type":"ContainerDied","Data":"9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00"}
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.012382 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn4md" event={"ID":"d7d4cd7e-0a84-400e-bc97-ab7d5c549837","Type":"ContainerStarted","Data":"4ef939010403c0043ebe0b0a9cda40084e2619737a15c5be49c0f6792afa8aa1"}
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.014642 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sjpgk" event={"ID":"97147617-d53b-41e7-871a-218080186366","Type":"ContainerDied","Data":"0083f339425713fed622e4e7e889b3882b9d721187bfb77fe8d4e88788f91a49"}
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.014685 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0083f339425713fed622e4e7e889b3882b9d721187bfb77fe8d4e88788f91a49"
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.014754 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sjpgk"
Jan 05 23:27:06 crc kubenswrapper[5034]: I0105 23:27:06.014951 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.027259 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn4md" event={"ID":"d7d4cd7e-0a84-400e-bc97-ab7d5c549837","Type":"ContainerStarted","Data":"7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372"}
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.455844 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d77b7579-w6pzc"]
Jan 05 23:27:07 crc kubenswrapper[5034]: E0105 23:27:07.456437 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97147617-d53b-41e7-871a-218080186366" containerName="mariadb-database-create"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.456458 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="97147617-d53b-41e7-871a-218080186366" containerName="mariadb-database-create"
Jan 05 23:27:07 crc kubenswrapper[5034]: E0105 23:27:07.456480 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70255284-3c60-4afa-afae-d8e33b86d065" containerName="mariadb-account-create-update"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.456488 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="70255284-3c60-4afa-afae-d8e33b86d065" containerName="mariadb-account-create-update"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.456753 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="70255284-3c60-4afa-afae-d8e33b86d065" containerName="mariadb-account-create-update"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.456782 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="97147617-d53b-41e7-871a-218080186366" containerName="mariadb-database-create"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.458323 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.492552 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d77b7579-w6pzc"]
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.505298 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ztkxg"]
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.506775 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.508569 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.508722 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.508773 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rwh2k"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.548825 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ztkxg"]
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.578831 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-config\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.578940 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-nb\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.578998 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-dns-svc\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.579044 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-887bn\" (UniqueName: \"kubernetes.io/projected/95c503a0-b4db-4e28-913a-830f750ebe0a-kube-api-access-887bn\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.579799 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-sb\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.683872 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-dns-svc\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.683952 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-887bn\" (UniqueName: \"kubernetes.io/projected/95c503a0-b4db-4e28-913a-830f750ebe0a-kube-api-access-887bn\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.683982 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-scripts\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.684022 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-sb\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.684063 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b217503b-88f8-4c9c-a62a-98fa4c2708fa-logs\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.684429 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-combined-ca-bundle\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.684490 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlvlh\" (UniqueName: \"kubernetes.io/projected/b217503b-88f8-4c9c-a62a-98fa4c2708fa-kube-api-access-wlvlh\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.684523 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-config\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.684553 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-config-data\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.684582 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-nb\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.685207 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-dns-svc\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.685268 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-sb\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.685283 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-nb\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.685373 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-config\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.703000 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-887bn\" (UniqueName: \"kubernetes.io/projected/95c503a0-b4db-4e28-913a-830f750ebe0a-kube-api-access-887bn\") pod \"dnsmasq-dns-59d77b7579-w6pzc\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.786144 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-scripts\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.786551 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b217503b-88f8-4c9c-a62a-98fa4c2708fa-logs\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.786590 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-combined-ca-bundle\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.786642 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlvlh\" (UniqueName: \"kubernetes.io/projected/b217503b-88f8-4c9c-a62a-98fa4c2708fa-kube-api-access-wlvlh\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.786702 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-config-data\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.787741 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b217503b-88f8-4c9c-a62a-98fa4c2708fa-logs\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.790782 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-combined-ca-bundle\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.791796 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-config-data\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.796049 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.803618 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-scripts\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.804384 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlvlh\" (UniqueName: \"kubernetes.io/projected/b217503b-88f8-4c9c-a62a-98fa4c2708fa-kube-api-access-wlvlh\") pod \"placement-db-sync-ztkxg\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") " pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:07 crc kubenswrapper[5034]: I0105 23:27:07.841103 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:08 crc kubenswrapper[5034]: I0105 23:27:08.045543 5034 generic.go:334] "Generic (PLEG): container finished" podID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerID="7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372" exitCode=0
Jan 05 23:27:08 crc kubenswrapper[5034]: I0105 23:27:08.045791 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn4md" event={"ID":"d7d4cd7e-0a84-400e-bc97-ab7d5c549837","Type":"ContainerDied","Data":"7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372"}
Jan 05 23:27:08 crc kubenswrapper[5034]: I0105 23:27:08.297020 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d77b7579-w6pzc"]
Jan 05 23:27:08 crc kubenswrapper[5034]: I0105 23:27:08.416245 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ztkxg"]
Jan 05 23:27:08 crc kubenswrapper[5034]: W0105 23:27:08.426234 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb217503b_88f8_4c9c_a62a_98fa4c2708fa.slice/crio-7b304806ac800d0277969aa3cd9ea8e0fd1d1f1e26a0c03d87f0d45779f92993 WatchSource:0}: Error finding container 7b304806ac800d0277969aa3cd9ea8e0fd1d1f1e26a0c03d87f0d45779f92993: Status 404 returned error can't find the container with id 7b304806ac800d0277969aa3cd9ea8e0fd1d1f1e26a0c03d87f0d45779f92993
Jan 05 23:27:09 crc kubenswrapper[5034]: I0105 23:27:09.055182 5034 generic.go:334] "Generic (PLEG): container finished" podID="95c503a0-b4db-4e28-913a-830f750ebe0a" containerID="b99f597aad7040621f55fbadf649a86f218accafcfb6e38c31ee5c30ceb2bd31" exitCode=0
Jan 05 23:27:09 crc kubenswrapper[5034]: I0105 23:27:09.055263 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" event={"ID":"95c503a0-b4db-4e28-913a-830f750ebe0a","Type":"ContainerDied","Data":"b99f597aad7040621f55fbadf649a86f218accafcfb6e38c31ee5c30ceb2bd31"}
Jan 05 23:27:09 crc kubenswrapper[5034]: I0105 23:27:09.055496 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" event={"ID":"95c503a0-b4db-4e28-913a-830f750ebe0a","Type":"ContainerStarted","Data":"1617386dc2a14c41332c144fd71a408df0e6cab58961c2b4f2586a934b08dc73"}
Jan 05 23:27:09 crc kubenswrapper[5034]: I0105 23:27:09.083369 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn4md" event={"ID":"d7d4cd7e-0a84-400e-bc97-ab7d5c549837","Type":"ContainerStarted","Data":"afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d"}
Jan 05 23:27:09 crc kubenswrapper[5034]: I0105 23:27:09.085505 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ztkxg" event={"ID":"b217503b-88f8-4c9c-a62a-98fa4c2708fa","Type":"ContainerStarted","Data":"cf8ee75eaa56134b0c8b6a1467c78a06c5b25ed64f87b6ac90d87ab6d8870004"}
Jan 05 23:27:09 crc kubenswrapper[5034]: I0105 23:27:09.085586 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ztkxg" event={"ID":"b217503b-88f8-4c9c-a62a-98fa4c2708fa","Type":"ContainerStarted","Data":"7b304806ac800d0277969aa3cd9ea8e0fd1d1f1e26a0c03d87f0d45779f92993"}
Jan 05 23:27:09 crc kubenswrapper[5034]: I0105 23:27:09.160692 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qn4md" podStartSLOduration=2.6942275799999997 podStartE2EDuration="5.160666657s" podCreationTimestamp="2026-01-05 23:27:04 +0000 UTC" firstStartedPulling="2026-01-05 23:27:06.014529471 +0000 UTC m=+5718.386528910" lastFinishedPulling="2026-01-05 23:27:08.480968548 +0000 UTC m=+5720.852967987" observedRunningTime="2026-01-05 23:27:09.133700792 +0000 UTC m=+5721.505700231" watchObservedRunningTime="2026-01-05 23:27:09.160666657 +0000 UTC m=+5721.532666096"
Jan 05 23:27:09 crc kubenswrapper[5034]: I0105 23:27:09.166389 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ztkxg" podStartSLOduration=2.166369379 podStartE2EDuration="2.166369379s" podCreationTimestamp="2026-01-05 23:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:27:09.153537845 +0000 UTC m=+5721.525537284" watchObservedRunningTime="2026-01-05 23:27:09.166369379 +0000 UTC m=+5721.538368818"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.106456 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" event={"ID":"95c503a0-b4db-4e28-913a-830f750ebe0a","Type":"ContainerStarted","Data":"b9043fb2a3bbf5e57f5cc037fc18426de8acda3ae4e704dee45e76ef10bffa78"}
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.139887 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" podStartSLOduration=3.139853656 podStartE2EDuration="3.139853656s" podCreationTimestamp="2026-01-05 23:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:27:10.129658197 +0000 UTC m=+5722.501657636" watchObservedRunningTime="2026-01-05 23:27:10.139853656 +0000 UTC m=+5722.511853095"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.277483 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dblt7"]
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.281880 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.287685 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dblt7"]
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.459829 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-utilities\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.459927 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srq5d\" (UniqueName: \"kubernetes.io/projected/465396a5-0453-4c86-816f-6e6131f7e355-kube-api-access-srq5d\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.460022 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-catalog-content\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.561785 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-utilities\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.561863 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srq5d\" (UniqueName: \"kubernetes.io/projected/465396a5-0453-4c86-816f-6e6131f7e355-kube-api-access-srq5d\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.561939 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-catalog-content\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.562692 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-catalog-content\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.563023 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-utilities\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.589965 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srq5d\" (UniqueName: \"kubernetes.io/projected/465396a5-0453-4c86-816f-6e6131f7e355-kube-api-access-srq5d\") pod \"redhat-operators-dblt7\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.601137 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dblt7"
Jan 05 23:27:10 crc kubenswrapper[5034]: I0105 23:27:10.840403 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"
Jan 05 23:27:10 crc kubenswrapper[5034]: E0105 23:27:10.840906 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:27:11 crc kubenswrapper[5034]: I0105 23:27:11.120059 5034 generic.go:334] "Generic (PLEG): container finished" podID="b217503b-88f8-4c9c-a62a-98fa4c2708fa" containerID="cf8ee75eaa56134b0c8b6a1467c78a06c5b25ed64f87b6ac90d87ab6d8870004" exitCode=0
Jan 05 23:27:11 crc kubenswrapper[5034]: I0105 23:27:11.121200 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ztkxg" event={"ID":"b217503b-88f8-4c9c-a62a-98fa4c2708fa","Type":"ContainerDied","Data":"cf8ee75eaa56134b0c8b6a1467c78a06c5b25ed64f87b6ac90d87ab6d8870004"}
Jan 05 23:27:11 crc kubenswrapper[5034]: I0105 23:27:11.121246 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:11 crc kubenswrapper[5034]: I0105 23:27:11.140175 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dblt7"]
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.168297 5034 generic.go:334] "Generic (PLEG): container finished" podID="465396a5-0453-4c86-816f-6e6131f7e355" containerID="ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda" exitCode=0
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.168914 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dblt7" event={"ID":"465396a5-0453-4c86-816f-6e6131f7e355","Type":"ContainerDied","Data":"ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda"}
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.168998 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dblt7" event={"ID":"465396a5-0453-4c86-816f-6e6131f7e355","Type":"ContainerStarted","Data":"494db16199c7a3b9f1d9cbba233f7b132c1d2632b13aab2e29f0bb1c5493f54f"}
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.616731 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.820833 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-combined-ca-bundle\") pod \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") "
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.821038 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b217503b-88f8-4c9c-a62a-98fa4c2708fa-logs\") pod \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") "
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.821125 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-config-data\") pod \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") "
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.821537 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b217503b-88f8-4c9c-a62a-98fa4c2708fa-logs" (OuterVolumeSpecName: "logs") pod "b217503b-88f8-4c9c-a62a-98fa4c2708fa" (UID: "b217503b-88f8-4c9c-a62a-98fa4c2708fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.822464 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlvlh\" (UniqueName: \"kubernetes.io/projected/b217503b-88f8-4c9c-a62a-98fa4c2708fa-kube-api-access-wlvlh\") pod \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") "
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.822788 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-scripts\") pod \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\" (UID: \"b217503b-88f8-4c9c-a62a-98fa4c2708fa\") "
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.824228 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b217503b-88f8-4c9c-a62a-98fa4c2708fa-logs\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.827220 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-scripts" (OuterVolumeSpecName: "scripts") pod "b217503b-88f8-4c9c-a62a-98fa4c2708fa" (UID: "b217503b-88f8-4c9c-a62a-98fa4c2708fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.833335 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b217503b-88f8-4c9c-a62a-98fa4c2708fa-kube-api-access-wlvlh" (OuterVolumeSpecName: "kube-api-access-wlvlh") pod "b217503b-88f8-4c9c-a62a-98fa4c2708fa" (UID: "b217503b-88f8-4c9c-a62a-98fa4c2708fa"). InnerVolumeSpecName "kube-api-access-wlvlh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.855205 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-config-data" (OuterVolumeSpecName: "config-data") pod "b217503b-88f8-4c9c-a62a-98fa4c2708fa" (UID: "b217503b-88f8-4c9c-a62a-98fa4c2708fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.857284 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b217503b-88f8-4c9c-a62a-98fa4c2708fa" (UID: "b217503b-88f8-4c9c-a62a-98fa4c2708fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.928030 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.928119 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.928145 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b217503b-88f8-4c9c-a62a-98fa4c2708fa-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:12 crc kubenswrapper[5034]: I0105 23:27:12.928166 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlvlh\" (UniqueName: \"kubernetes.io/projected/b217503b-88f8-4c9c-a62a-98fa4c2708fa-kube-api-access-wlvlh\") on node \"crc\" DevicePath \"\""
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.180192 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ztkxg" event={"ID":"b217503b-88f8-4c9c-a62a-98fa4c2708fa","Type":"ContainerDied","Data":"7b304806ac800d0277969aa3cd9ea8e0fd1d1f1e26a0c03d87f0d45779f92993"}
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.180272 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b304806ac800d0277969aa3cd9ea8e0fd1d1f1e26a0c03d87f0d45779f92993"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.180229 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ztkxg"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.709322 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5df86d5b5d-4gxb8"]
Jan 05 23:27:13 crc kubenswrapper[5034]: E0105 23:27:13.710096 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b217503b-88f8-4c9c-a62a-98fa4c2708fa" containerName="placement-db-sync"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.710115 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b217503b-88f8-4c9c-a62a-98fa4c2708fa" containerName="placement-db-sync"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.710315 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b217503b-88f8-4c9c-a62a-98fa4c2708fa" containerName="placement-db-sync"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.711445 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.716924 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.717141 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.717433 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.717543 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rwh2k"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.717831 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.723036 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5df86d5b5d-4gxb8"]
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.849428 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzrm\" (UniqueName: \"kubernetes.io/projected/4be72b04-172a-4ba5-83da-ff186babfdd1-kube-api-access-vvzrm\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.849526 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-public-tls-certs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.849571 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-combined-ca-bundle\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.849749 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-config-data\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.849890 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-scripts\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.850959 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-internal-tls-certs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.851427 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be72b04-172a-4ba5-83da-ff186babfdd1-logs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.953939 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-public-tls-certs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.954033 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-combined-ca-bundle\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.954071 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-config-data\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.954126 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-scripts\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.954194 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-internal-tls-certs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.954225 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be72b04-172a-4ba5-83da-ff186babfdd1-logs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.954271 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvzrm\" (UniqueName: \"kubernetes.io/projected/4be72b04-172a-4ba5-83da-ff186babfdd1-kube-api-access-vvzrm\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.955741 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be72b04-172a-4ba5-83da-ff186babfdd1-logs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.962701 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-scripts\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.963624 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-combined-ca-bundle\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.963927 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-config-data\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.964177 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-internal-tls-certs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.968686 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be72b04-172a-4ba5-83da-ff186babfdd1-public-tls-certs\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:13 crc kubenswrapper[5034]: I0105 23:27:13.975859 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvzrm\" (UniqueName: \"kubernetes.io/projected/4be72b04-172a-4ba5-83da-ff186babfdd1-kube-api-access-vvzrm\") pod \"placement-5df86d5b5d-4gxb8\" (UID: \"4be72b04-172a-4ba5-83da-ff186babfdd1\") " pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:14 crc kubenswrapper[5034]: I0105 23:27:14.049674 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:14 crc kubenswrapper[5034]: I0105 23:27:14.192714 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dblt7" event={"ID":"465396a5-0453-4c86-816f-6e6131f7e355","Type":"ContainerStarted","Data":"4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139"}
Jan 05 23:27:14 crc kubenswrapper[5034]: I0105 23:27:14.626097 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:14 crc kubenswrapper[5034]: I0105 23:27:14.626571 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:14 crc kubenswrapper[5034]: I0105 23:27:14.653091 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5df86d5b5d-4gxb8"]
Jan 05 23:27:14 crc kubenswrapper[5034]: I0105 23:27:14.687255 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:15 crc kubenswrapper[5034]: I0105 23:27:15.205103 5034 generic.go:334] "Generic (PLEG): container finished" podID="465396a5-0453-4c86-816f-6e6131f7e355" containerID="4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139" exitCode=0
Jan 05 23:27:15 crc kubenswrapper[5034]: I0105 23:27:15.205207 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dblt7" event={"ID":"465396a5-0453-4c86-816f-6e6131f7e355","Type":"ContainerDied","Data":"4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139"}
Jan 05 23:27:15 crc kubenswrapper[5034]: I0105 23:27:15.207811 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5df86d5b5d-4gxb8" event={"ID":"4be72b04-172a-4ba5-83da-ff186babfdd1","Type":"ContainerStarted","Data":"701b519db3d2cbf7eedcfb496d3bf7cb79af4462c5aa4f348e9166099fd36943"}
Jan 05 23:27:15 crc kubenswrapper[5034]: I0105 23:27:15.207838 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5df86d5b5d-4gxb8" event={"ID":"4be72b04-172a-4ba5-83da-ff186babfdd1","Type":"ContainerStarted","Data":"83ce7d25dfcc62995bdcbb66904ea69bc285c348044c2bf3be70abc89a5ab690"}
Jan 05 23:27:15 crc kubenswrapper[5034]: I0105 23:27:15.207856 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5df86d5b5d-4gxb8" event={"ID":"4be72b04-172a-4ba5-83da-ff186babfdd1","Type":"ContainerStarted","Data":"abe77432d928b5eb0abd32947f586cf7774e44d011956ecf36a3687dc18c9d40"}
Jan 05 23:27:15 crc kubenswrapper[5034]: I0105 23:27:15.208393 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:15 crc kubenswrapper[5034]: I0105 23:27:15.252696 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5df86d5b5d-4gxb8" podStartSLOduration=2.252672085 podStartE2EDuration="2.252672085s" podCreationTimestamp="2026-01-05 23:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:27:15.25002198 +0000 UTC m=+5727.622021439" watchObservedRunningTime="2026-01-05 23:27:15.252672085 +0000 UTC m=+5727.624671524"
Jan 05 23:27:15 crc kubenswrapper[5034]: I0105 23:27:15.263778 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:16 crc kubenswrapper[5034]: I0105 23:27:16.218860 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dblt7" event={"ID":"465396a5-0453-4c86-816f-6e6131f7e355","Type":"ContainerStarted","Data":"19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602"}
Jan 05 23:27:16 crc kubenswrapper[5034]: I0105 23:27:16.219849 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5df86d5b5d-4gxb8"
Jan 05 23:27:16 crc kubenswrapper[5034]: I0105 23:27:16.246822 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dblt7" podStartSLOduration=2.753157789 podStartE2EDuration="6.246794597s" podCreationTimestamp="2026-01-05 23:27:10 +0000 UTC" firstStartedPulling="2026-01-05 23:27:12.17140468 +0000 UTC m=+5724.543404119" lastFinishedPulling="2026-01-05 23:27:15.665041488 +0000 UTC m=+5728.037040927" observedRunningTime="2026-01-05 23:27:16.236039412 +0000 UTC m=+5728.608038881" watchObservedRunningTime="2026-01-05 23:27:16.246794597 +0000 UTC m=+5728.618794036"
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.080475 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qn4md"]
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.227550 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qn4md" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerName="registry-server" containerID="cri-o://afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d" gracePeriod=2
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.750720 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qn4md"
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.798284 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc"
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.875224 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68bd7cb495-p2wxr"]
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.875782 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" podUID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" containerName="dnsmasq-dns" containerID="cri-o://019b8ad9abe7b40563d3f5bea32e9773a4959f4c637269f1e9224a422b41d7d5" gracePeriod=10
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.928027 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8q6b\" (UniqueName: \"kubernetes.io/projected/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-kube-api-access-g8q6b\") pod \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") "
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.928163 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-utilities\") pod \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") "
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.928210 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-catalog-content\") pod \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\" (UID: \"d7d4cd7e-0a84-400e-bc97-ab7d5c549837\") "
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.937277 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-kube-api-access-g8q6b" (OuterVolumeSpecName: "kube-api-access-g8q6b") pod "d7d4cd7e-0a84-400e-bc97-ab7d5c549837" (UID: "d7d4cd7e-0a84-400e-bc97-ab7d5c549837"). InnerVolumeSpecName "kube-api-access-g8q6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.941234 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-utilities" (OuterVolumeSpecName: "utilities") pod "d7d4cd7e-0a84-400e-bc97-ab7d5c549837" (UID: "d7d4cd7e-0a84-400e-bc97-ab7d5c549837"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:27:17 crc kubenswrapper[5034]: I0105 23:27:17.993470 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7d4cd7e-0a84-400e-bc97-ab7d5c549837" (UID: "d7d4cd7e-0a84-400e-bc97-ab7d5c549837"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.033046 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8q6b\" (UniqueName: \"kubernetes.io/projected/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-kube-api-access-g8q6b\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.033306 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.033322 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d4cd7e-0a84-400e-bc97-ab7d5c549837-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.243053 5034 generic.go:334] "Generic (PLEG): container finished" podID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" containerID="019b8ad9abe7b40563d3f5bea32e9773a4959f4c637269f1e9224a422b41d7d5" exitCode=0 Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.243122 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" event={"ID":"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180","Type":"ContainerDied","Data":"019b8ad9abe7b40563d3f5bea32e9773a4959f4c637269f1e9224a422b41d7d5"} Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.250587 5034 generic.go:334] "Generic (PLEG): container finished" podID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerID="afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d" exitCode=0 Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.250641 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn4md" event={"ID":"d7d4cd7e-0a84-400e-bc97-ab7d5c549837","Type":"ContainerDied","Data":"afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d"} Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.250684 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn4md" event={"ID":"d7d4cd7e-0a84-400e-bc97-ab7d5c549837","Type":"ContainerDied","Data":"4ef939010403c0043ebe0b0a9cda40084e2619737a15c5be49c0f6792afa8aa1"} Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.250709 5034 scope.go:117] "RemoveContainer" containerID="afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.250919 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qn4md" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.283110 5034 scope.go:117] "RemoveContainer" containerID="7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.294758 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qn4md"] Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.310057 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qn4md"] Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.316652 5034 scope.go:117] "RemoveContainer" containerID="9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.334851 5034 scope.go:117] "RemoveContainer" containerID="afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d" Jan 05 23:27:18 crc kubenswrapper[5034]: E0105 23:27:18.340342 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d\": container with ID starting with afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d not found: ID does not exist" containerID="afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.340400 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d"} err="failed to get container status \"afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d\": rpc error: code = NotFound desc = could not find container \"afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d\": container with ID starting with afef471362a349805563f2bf06e955baf23c81f33186ab2dea4b2071cc0c6d7d not found: ID does not exist" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.340431 5034 scope.go:117] "RemoveContainer" containerID="7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372" Jan 05 23:27:18 crc kubenswrapper[5034]: E0105 23:27:18.340751 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372\": container with ID starting with 7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372 not found: ID does not exist" containerID="7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.340771 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372"} err="failed to get container status \"7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372\": rpc error: code = NotFound desc = could not find container \"7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372\": container with ID starting with 7b345f67454fc0b23c1385fbba4646b8e7f844d4a94649e6baf9eeb66ab26372 not found: ID does not exist" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.340784 5034 scope.go:117] "RemoveContainer" containerID="9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00" Jan 05 23:27:18 crc kubenswrapper[5034]: E0105 23:27:18.341014 5034 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00\": container with ID starting with 9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00 not found: ID does not exist" containerID="9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.341028 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00"} err="failed to get container status \"9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00\": rpc error: code = NotFound desc = could not find container \"9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00\": container with ID starting with 9fae1c420f4e330481d74bf0d7c2a16d3c29566ace98083bcaed4a5da7b88b00 not found: ID does not exist" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.497574 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.646759 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-sb\") pod \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.646864 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7llvv\" (UniqueName: \"kubernetes.io/projected/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-kube-api-access-7llvv\") pod \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.647119 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-nb\") pod \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.647179 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-dns-svc\") pod \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.647210 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-config\") pod \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\" (UID: \"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180\") " Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.651385 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-kube-api-access-7llvv" (OuterVolumeSpecName: "kube-api-access-7llvv") pod "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" (UID: "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180"). InnerVolumeSpecName "kube-api-access-7llvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.694365 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" (UID: "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.700822 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-config" (OuterVolumeSpecName: "config") pod "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" (UID: "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.701405 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" (UID: "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.712212 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" (UID: "dd4b73dd-10a6-451a-bcc4-f64cb7fc7180"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.749331 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.749728 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.749741 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.749750 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:18 crc kubenswrapper[5034]: I0105 23:27:18.749764 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7llvv\" (UniqueName: \"kubernetes.io/projected/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180-kube-api-access-7llvv\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:19 crc kubenswrapper[5034]: I0105 23:27:19.266832 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" event={"ID":"dd4b73dd-10a6-451a-bcc4-f64cb7fc7180","Type":"ContainerDied","Data":"2dee03d72b004555c6b2840b331b2d94ba61eb5ec87e15b056f4a106a3ac8680"} Jan 05 23:27:19 crc kubenswrapper[5034]: I0105 23:27:19.266900 5034 scope.go:117] "RemoveContainer" 
containerID="019b8ad9abe7b40563d3f5bea32e9773a4959f4c637269f1e9224a422b41d7d5" Jan 05 23:27:19 crc kubenswrapper[5034]: I0105 23:27:19.267072 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bd7cb495-p2wxr" Jan 05 23:27:19 crc kubenswrapper[5034]: I0105 23:27:19.293013 5034 scope.go:117] "RemoveContainer" containerID="2b2821d6d7c74001485215370d3f90dc339fd45a24e60f2319d2045675228cee" Jan 05 23:27:19 crc kubenswrapper[5034]: I0105 23:27:19.320274 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68bd7cb495-p2wxr"] Jan 05 23:27:19 crc kubenswrapper[5034]: I0105 23:27:19.337880 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68bd7cb495-p2wxr"] Jan 05 23:27:19 crc kubenswrapper[5034]: I0105 23:27:19.849687 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" path="/var/lib/kubelet/pods/d7d4cd7e-0a84-400e-bc97-ab7d5c549837/volumes" Jan 05 23:27:19 crc kubenswrapper[5034]: I0105 23:27:19.850907 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" path="/var/lib/kubelet/pods/dd4b73dd-10a6-451a-bcc4-f64cb7fc7180/volumes" Jan 05 23:27:20 crc kubenswrapper[5034]: I0105 23:27:20.601632 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dblt7" Jan 05 23:27:20 crc kubenswrapper[5034]: I0105 23:27:20.602834 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dblt7" Jan 05 23:27:21 crc kubenswrapper[5034]: I0105 23:27:21.660093 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dblt7" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="registry-server" probeResult="failure" output=< Jan 05 23:27:21 crc kubenswrapper[5034]: timeout: failed to connect service ":50051" within 1s Jan 05 23:27:21 crc kubenswrapper[5034]: > Jan 05 23:27:22 crc kubenswrapper[5034]: I0105 23:27:22.838529 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:27:22 crc kubenswrapper[5034]: E0105 23:27:22.839176 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:27:30 crc kubenswrapper[5034]: I0105 23:27:30.669765 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dblt7" Jan 05 23:27:30 crc kubenswrapper[5034]: I0105 23:27:30.747025 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dblt7" Jan 05 23:27:30 crc kubenswrapper[5034]: I0105 23:27:30.924130 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dblt7"] Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.386385 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dblt7" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="registry-server" 
containerID="cri-o://19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602" gracePeriod=2 Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.865119 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dblt7" Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.883294 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-utilities\") pod \"465396a5-0453-4c86-816f-6e6131f7e355\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.883418 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-catalog-content\") pod \"465396a5-0453-4c86-816f-6e6131f7e355\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.883494 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srq5d\" (UniqueName: \"kubernetes.io/projected/465396a5-0453-4c86-816f-6e6131f7e355-kube-api-access-srq5d\") pod \"465396a5-0453-4c86-816f-6e6131f7e355\" (UID: \"465396a5-0453-4c86-816f-6e6131f7e355\") " Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.884870 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-utilities" (OuterVolumeSpecName: "utilities") pod "465396a5-0453-4c86-816f-6e6131f7e355" (UID: "465396a5-0453-4c86-816f-6e6131f7e355"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.945479 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465396a5-0453-4c86-816f-6e6131f7e355-kube-api-access-srq5d" (OuterVolumeSpecName: "kube-api-access-srq5d") pod "465396a5-0453-4c86-816f-6e6131f7e355" (UID: "465396a5-0453-4c86-816f-6e6131f7e355"). InnerVolumeSpecName "kube-api-access-srq5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.985729 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srq5d\" (UniqueName: \"kubernetes.io/projected/465396a5-0453-4c86-816f-6e6131f7e355-kube-api-access-srq5d\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:32 crc kubenswrapper[5034]: I0105 23:27:32.985790 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.010336 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "465396a5-0453-4c86-816f-6e6131f7e355" (UID: "465396a5-0453-4c86-816f-6e6131f7e355"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.087352 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465396a5-0453-4c86-816f-6e6131f7e355-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.397765 5034 generic.go:334] "Generic (PLEG): container finished" podID="465396a5-0453-4c86-816f-6e6131f7e355" containerID="19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602" exitCode=0 Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.397836 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dblt7" event={"ID":"465396a5-0453-4c86-816f-6e6131f7e355","Type":"ContainerDied","Data":"19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602"} Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.397909 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dblt7" event={"ID":"465396a5-0453-4c86-816f-6e6131f7e355","Type":"ContainerDied","Data":"494db16199c7a3b9f1d9cbba233f7b132c1d2632b13aab2e29f0bb1c5493f54f"} Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.397908 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dblt7" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.397930 5034 scope.go:117] "RemoveContainer" containerID="19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.417442 5034 scope.go:117] "RemoveContainer" containerID="4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.446762 5034 scope.go:117] "RemoveContainer" containerID="ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.454876 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dblt7"] Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.462863 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dblt7"] Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.479039 5034 scope.go:117] "RemoveContainer" containerID="19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602" Jan 05 23:27:33 crc kubenswrapper[5034]: E0105 23:27:33.479649 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602\": container with ID starting with 19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602 not found: ID does not exist" containerID="19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.479694 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602"} err="failed to get container status \"19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602\": rpc error: code = NotFound desc = could not find container \"19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602\": container with ID starting with 19102f9a5893355306924cbfcb42087774097a568e5cbba59775fa5de1c5d602 not found: ID does not exist" Jan 05 23:27:33 crc 
kubenswrapper[5034]: I0105 23:27:33.479753 5034 scope.go:117] "RemoveContainer" containerID="4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139" Jan 05 23:27:33 crc kubenswrapper[5034]: E0105 23:27:33.480153 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139\": container with ID starting with 4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139 not found: ID does not exist" containerID="4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.480187 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139"} err="failed to get container status \"4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139\": rpc error: code = NotFound desc = could not find container \"4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139\": container with ID starting with 4b68292827301296e97bd9ce26d524fd8c40b242d3137cc59289857a40572139 not found: ID does not exist" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.480213 5034 scope.go:117] "RemoveContainer" containerID="ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda" Jan 05 23:27:33 crc kubenswrapper[5034]: E0105 23:27:33.480546 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda\": container with ID starting with ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda not found: ID does not exist" containerID="ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.480571 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda"} err="failed to get container status \"ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda\": rpc error: code = NotFound desc = could not find container \"ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda\": container with ID starting with ac8fb362a0ffb19f76671869f2c2412c3841531cb65bc1db9257cfc085d2adda not found: ID does not exist" Jan 05 23:27:33 crc kubenswrapper[5034]: I0105 23:27:33.849784 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465396a5-0453-4c86-816f-6e6131f7e355" path="/var/lib/kubelet/pods/465396a5-0453-4c86-816f-6e6131f7e355/volumes" Jan 05 23:27:36 crc kubenswrapper[5034]: I0105 23:27:36.839468 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:27:36 crc kubenswrapper[5034]: E0105 23:27:36.840294 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:27:45 crc kubenswrapper[5034]: I0105 23:27:45.120908 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-5df86d5b5d-4gxb8" Jan 05 23:27:45 crc kubenswrapper[5034]: I0105 23:27:45.172131 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5df86d5b5d-4gxb8" Jan 05 23:27:51 crc kubenswrapper[5034]: I0105 23:27:51.839126 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:27:51 crc kubenswrapper[5034]: E0105 23:27:51.840386 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:28:03 crc kubenswrapper[5034]: I0105 23:28:03.839052 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:28:03 crc kubenswrapper[5034]: E0105 23:28:03.840105 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.078389 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dhgbn"] Jan 05 23:28:10 crc kubenswrapper[5034]: E0105 23:28:10.080546 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="extract-content" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.080664 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="extract-content" Jan 05 23:28:10 crc kubenswrapper[5034]: E0105 23:28:10.080756 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="extract-utilities" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.080833 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="extract-utilities" Jan 05 23:28:10 crc kubenswrapper[5034]: E0105 23:28:10.080913 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerName="extract-utilities" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.081000 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerName="extract-utilities" Jan 05 23:28:10 crc kubenswrapper[5034]: E0105 23:28:10.081119 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="registry-server" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.081220 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="registry-server" Jan 05 23:28:10 crc kubenswrapper[5034]: E0105 23:28:10.081310 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" containerName="init" Jan 05 23:28:10 crc kubenswrapper[5034]: 
I0105 23:28:10.081389 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" containerName="init" Jan 05 23:28:10 crc kubenswrapper[5034]: E0105 23:28:10.081475 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" containerName="dnsmasq-dns" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.081561 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" containerName="dnsmasq-dns" Jan 05 23:28:10 crc kubenswrapper[5034]: E0105 23:28:10.081657 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerName="extract-content" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.081752 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerName="extract-content" Jan 05 23:28:10 crc kubenswrapper[5034]: E0105 23:28:10.081832 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerName="registry-server" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.081911 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerName="registry-server" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.082261 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d4cd7e-0a84-400e-bc97-ab7d5c549837" containerName="registry-server" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.082373 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="465396a5-0453-4c86-816f-6e6131f7e355" containerName="registry-server" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.082485 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4b73dd-10a6-451a-bcc4-f64cb7fc7180" containerName="dnsmasq-dns" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.083403 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dhgbn" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.090468 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dhgbn"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.174362 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-l6mz5"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.176038 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-l6mz5" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.191705 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l6mz5"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.216120 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-869d-account-create-update-2dgbp"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.227826 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwpcd\" (UniqueName: \"kubernetes.io/projected/efdb7a79-8ad0-4b33-ada0-48c0613e3541-kube-api-access-lwpcd\") pod \"nova-api-db-create-dhgbn\" (UID: \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\") " pod="openstack/nova-api-db-create-dhgbn" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.227926 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efdb7a79-8ad0-4b33-ada0-48c0613e3541-operator-scripts\") pod \"nova-api-db-create-dhgbn\" (UID: \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\") " pod="openstack/nova-api-db-create-dhgbn" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.229740 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-869d-account-create-update-2dgbp" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.236182 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.251161 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-869d-account-create-update-2dgbp"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.329635 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwpcd\" (UniqueName: \"kubernetes.io/projected/efdb7a79-8ad0-4b33-ada0-48c0613e3541-kube-api-access-lwpcd\") pod \"nova-api-db-create-dhgbn\" (UID: \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\") " pod="openstack/nova-api-db-create-dhgbn" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.329720 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efdb7a79-8ad0-4b33-ada0-48c0613e3541-operator-scripts\") pod \"nova-api-db-create-dhgbn\" (UID: \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\") " pod="openstack/nova-api-db-create-dhgbn" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.329758 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftjc\" (UniqueName: \"kubernetes.io/projected/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-kube-api-access-lftjc\") pod \"nova-api-869d-account-create-update-2dgbp\" (UID: \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\") " pod="openstack/nova-api-869d-account-create-update-2dgbp" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.329828 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-operator-scripts\") pod \"nova-api-869d-account-create-update-2dgbp\" (UID: \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\") " pod="openstack/nova-api-869d-account-create-update-2dgbp" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.329860 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df9b691-1416-4d79-adfb-c00f80cadac4-operator-scripts\") pod \"nova-cell0-db-create-l6mz5\" (UID: \"8df9b691-1416-4d79-adfb-c00f80cadac4\") " pod="openstack/nova-cell0-db-create-l6mz5" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.329881 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl7b\" (UniqueName: \"kubernetes.io/projected/8df9b691-1416-4d79-adfb-c00f80cadac4-kube-api-access-lbl7b\") pod \"nova-cell0-db-create-l6mz5\" (UID: \"8df9b691-1416-4d79-adfb-c00f80cadac4\") " pod="openstack/nova-cell0-db-create-l6mz5" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.330996 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efdb7a79-8ad0-4b33-ada0-48c0613e3541-operator-scripts\") pod \"nova-api-db-create-dhgbn\" (UID: \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\") " pod="openstack/nova-api-db-create-dhgbn" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.351905 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwpcd\" (UniqueName: \"kubernetes.io/projected/efdb7a79-8ad0-4b33-ada0-48c0613e3541-kube-api-access-lwpcd\") pod \"nova-api-db-create-dhgbn\" (UID: \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\") " pod="openstack/nova-api-db-create-dhgbn" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.372526 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8rz48"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.373823 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8rz48" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.399728 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-15ab-account-create-update-rmxq6"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.401204 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.402894 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.414598 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dhgbn" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.424700 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8rz48"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.433487 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2483105c-df62-435f-a09c-cc9750dd2850-operator-scripts\") pod \"nova-cell1-db-create-8rz48\" (UID: \"2483105c-df62-435f-a09c-cc9750dd2850\") " pod="openstack/nova-cell1-db-create-8rz48" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.433556 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftjc\" (UniqueName: \"kubernetes.io/projected/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-kube-api-access-lftjc\") pod \"nova-api-869d-account-create-update-2dgbp\" (UID: \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\") " pod="openstack/nova-api-869d-account-create-update-2dgbp" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.433646 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-operator-scripts\") pod \"nova-api-869d-account-create-update-2dgbp\" (UID: \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\") " pod="openstack/nova-api-869d-account-create-update-2dgbp" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.433686 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df9b691-1416-4d79-adfb-c00f80cadac4-operator-scripts\") pod \"nova-cell0-db-create-l6mz5\" (UID: \"8df9b691-1416-4d79-adfb-c00f80cadac4\") " pod="openstack/nova-cell0-db-create-l6mz5" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.433712 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbl7b\" (UniqueName: \"kubernetes.io/projected/8df9b691-1416-4d79-adfb-c00f80cadac4-kube-api-access-lbl7b\") pod \"nova-cell0-db-create-l6mz5\" (UID: \"8df9b691-1416-4d79-adfb-c00f80cadac4\") " pod="openstack/nova-cell0-db-create-l6mz5" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.433746 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzf7\" (UniqueName: \"kubernetes.io/projected/2483105c-df62-435f-a09c-cc9750dd2850-kube-api-access-vdzf7\") pod \"nova-cell1-db-create-8rz48\" (UID: \"2483105c-df62-435f-a09c-cc9750dd2850\") " pod="openstack/nova-cell1-db-create-8rz48" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.434031 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-15ab-account-create-update-rmxq6"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.434790 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df9b691-1416-4d79-adfb-c00f80cadac4-operator-scripts\") pod \"nova-cell0-db-create-l6mz5\" (UID: \"8df9b691-1416-4d79-adfb-c00f80cadac4\") " pod="openstack/nova-cell0-db-create-l6mz5" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.436428 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-operator-scripts\") pod \"nova-api-869d-account-create-update-2dgbp\" 
(UID: \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\") " pod="openstack/nova-api-869d-account-create-update-2dgbp" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.458223 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftjc\" (UniqueName: \"kubernetes.io/projected/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-kube-api-access-lftjc\") pod \"nova-api-869d-account-create-update-2dgbp\" (UID: \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\") " pod="openstack/nova-api-869d-account-create-update-2dgbp" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.458280 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbl7b\" (UniqueName: \"kubernetes.io/projected/8df9b691-1416-4d79-adfb-c00f80cadac4-kube-api-access-lbl7b\") pod \"nova-cell0-db-create-l6mz5\" (UID: \"8df9b691-1416-4d79-adfb-c00f80cadac4\") " pod="openstack/nova-cell0-db-create-l6mz5" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.505420 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l6mz5" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.535809 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-operator-scripts\") pod \"nova-cell0-15ab-account-create-update-rmxq6\" (UID: \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\") " pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.535971 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r2r5\" (UniqueName: \"kubernetes.io/projected/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-kube-api-access-6r2r5\") pod \"nova-cell0-15ab-account-create-update-rmxq6\" (UID: \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\") " pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.536047 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzf7\" (UniqueName: \"kubernetes.io/projected/2483105c-df62-435f-a09c-cc9750dd2850-kube-api-access-vdzf7\") pod \"nova-cell1-db-create-8rz48\" (UID: \"2483105c-df62-435f-a09c-cc9750dd2850\") " pod="openstack/nova-cell1-db-create-8rz48" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.536188 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2483105c-df62-435f-a09c-cc9750dd2850-operator-scripts\") pod \"nova-cell1-db-create-8rz48\" (UID: \"2483105c-df62-435f-a09c-cc9750dd2850\") " pod="openstack/nova-cell1-db-create-8rz48" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.537238 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2483105c-df62-435f-a09c-cc9750dd2850-operator-scripts\") pod \"nova-cell1-db-create-8rz48\" (UID: \"2483105c-df62-435f-a09c-cc9750dd2850\") " pod="openstack/nova-cell1-db-create-8rz48" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.560149 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-869d-account-create-update-2dgbp" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.578497 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzf7\" (UniqueName: \"kubernetes.io/projected/2483105c-df62-435f-a09c-cc9750dd2850-kube-api-access-vdzf7\") pod \"nova-cell1-db-create-8rz48\" (UID: \"2483105c-df62-435f-a09c-cc9750dd2850\") " pod="openstack/nova-cell1-db-create-8rz48" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.612443 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7c86-account-create-update-plzz9"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.613904 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7c86-account-create-update-plzz9" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.617531 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.631209 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7c86-account-create-update-plzz9"] Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.639162 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r2r5\" (UniqueName: \"kubernetes.io/projected/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-kube-api-access-6r2r5\") pod \"nova-cell0-15ab-account-create-update-rmxq6\" (UID: \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\") " pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.639774 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-operator-scripts\") pod \"nova-cell0-15ab-account-create-update-rmxq6\" (UID: \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\") " pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.640927 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-operator-scripts\") pod \"nova-cell0-15ab-account-create-update-rmxq6\" (UID: \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\") " pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.664302 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r2r5\" (UniqueName: \"kubernetes.io/projected/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-kube-api-access-6r2r5\") pod \"nova-cell0-15ab-account-create-update-rmxq6\" (UID: \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\") " pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.713409 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8rz48" Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.723412 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-15ab-account-create-update-rmxq6"
Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.742114 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7kd\" (UniqueName: \"kubernetes.io/projected/89dfae35-d17d-42c1-b717-0bb03abf7fc7-kube-api-access-6r7kd\") pod \"nova-cell1-7c86-account-create-update-plzz9\" (UID: \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\") " pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.742207 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dfae35-d17d-42c1-b717-0bb03abf7fc7-operator-scripts\") pod \"nova-cell1-7c86-account-create-update-plzz9\" (UID: \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\") " pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.844296 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7kd\" (UniqueName: \"kubernetes.io/projected/89dfae35-d17d-42c1-b717-0bb03abf7fc7-kube-api-access-6r7kd\") pod \"nova-cell1-7c86-account-create-update-plzz9\" (UID: \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\") " pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.844680 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dfae35-d17d-42c1-b717-0bb03abf7fc7-operator-scripts\") pod \"nova-cell1-7c86-account-create-update-plzz9\" (UID: \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\") " pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.845517 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dfae35-d17d-42c1-b717-0bb03abf7fc7-operator-scripts\") pod \"nova-cell1-7c86-account-create-update-plzz9\" (UID: \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\") " pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.867122 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7kd\" (UniqueName: \"kubernetes.io/projected/89dfae35-d17d-42c1-b717-0bb03abf7fc7-kube-api-access-6r7kd\") pod \"nova-cell1-7c86-account-create-update-plzz9\" (UID: \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\") " pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.960996 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:10 crc kubenswrapper[5034]: I0105 23:28:10.974608 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dhgbn"]
Jan 05 23:28:11 crc kubenswrapper[5034]: W0105 23:28:11.043718 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefdb7a79_8ad0_4b33_ada0_48c0613e3541.slice/crio-cdc596835e4012f3d8a0cf15506728b33791e7abce8f2b37e9bb9e7afee744ef WatchSource:0}: Error finding container cdc596835e4012f3d8a0cf15506728b33791e7abce8f2b37e9bb9e7afee744ef: Status 404 returned error can't find the container with id cdc596835e4012f3d8a0cf15506728b33791e7abce8f2b37e9bb9e7afee744ef
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.165994 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l6mz5"]
Jan 05 23:28:11 crc kubenswrapper[5034]: W0105 23:28:11.188306 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47fc0f5e_1048_4339_b2f7_7a67916d7f0e.slice/crio-982f93e75441970c5f2fa3d2630c870df363f3babf45a8b0ec7ffb07929af260 WatchSource:0}: Error finding container 982f93e75441970c5f2fa3d2630c870df363f3babf45a8b0ec7ffb07929af260: Status 404 returned error can't find the container with id 982f93e75441970c5f2fa3d2630c870df363f3babf45a8b0ec7ffb07929af260
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.199129 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-869d-account-create-update-2dgbp"]
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.361362 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8rz48"]
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.368808 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-15ab-account-create-update-rmxq6"]
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.514100 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7c86-account-create-update-plzz9"]
Jan 05 23:28:11 crc kubenswrapper[5034]: W0105 23:28:11.519235 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89dfae35_d17d_42c1_b717_0bb03abf7fc7.slice/crio-01ea30e718caff960e381d0989220addb4fa9f3b2d110293b89a6c12d48f0049 WatchSource:0}: Error finding container 01ea30e718caff960e381d0989220addb4fa9f3b2d110293b89a6c12d48f0049: Status 404 returned error can't find the container with id 01ea30e718caff960e381d0989220addb4fa9f3b2d110293b89a6c12d48f0049
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.764389 5034 generic.go:334] "Generic (PLEG): container finished" podID="47fc0f5e-1048-4339-b2f7-7a67916d7f0e" containerID="b7f7dcd84c32b6c4ec88f53ca3f696f3c8d5058296f420178458415be61281a4" exitCode=0
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.764478 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-869d-account-create-update-2dgbp" event={"ID":"47fc0f5e-1048-4339-b2f7-7a67916d7f0e","Type":"ContainerDied","Data":"b7f7dcd84c32b6c4ec88f53ca3f696f3c8d5058296f420178458415be61281a4"}
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.764894 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-869d-account-create-update-2dgbp" event={"ID":"47fc0f5e-1048-4339-b2f7-7a67916d7f0e","Type":"ContainerStarted","Data":"982f93e75441970c5f2fa3d2630c870df363f3babf45a8b0ec7ffb07929af260"}
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.766335 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7c86-account-create-update-plzz9" event={"ID":"89dfae35-d17d-42c1-b717-0bb03abf7fc7","Type":"ContainerStarted","Data":"01ea30e718caff960e381d0989220addb4fa9f3b2d110293b89a6c12d48f0049"}
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.769286 5034 generic.go:334] "Generic (PLEG): container finished" podID="efdb7a79-8ad0-4b33-ada0-48c0613e3541" containerID="ac4d50feb7e948f9e4538756e93eb7eeec4d816a870a410eacc61514b7dfdaa0" exitCode=0
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.769380 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dhgbn" event={"ID":"efdb7a79-8ad0-4b33-ada0-48c0613e3541","Type":"ContainerDied","Data":"ac4d50feb7e948f9e4538756e93eb7eeec4d816a870a410eacc61514b7dfdaa0"}
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.769453 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dhgbn" event={"ID":"efdb7a79-8ad0-4b33-ada0-48c0613e3541","Type":"ContainerStarted","Data":"cdc596835e4012f3d8a0cf15506728b33791e7abce8f2b37e9bb9e7afee744ef"}
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.770800 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" event={"ID":"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5","Type":"ContainerStarted","Data":"859011dbd61e0fffb0968af9be2bf7d7a99d2b79846360a075eb8173716f5136"}
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.772431 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8rz48" event={"ID":"2483105c-df62-435f-a09c-cc9750dd2850","Type":"ContainerStarted","Data":"0fc58203e7d45af98c3b2ac97d02ff11e7cc7f768de5e27d1001e77629575cd8"}
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.774222 5034 generic.go:334] "Generic (PLEG): container finished" podID="8df9b691-1416-4d79-adfb-c00f80cadac4" containerID="de5ae22f45551f2ab1cc85f4ccaac35775024dbf85d9c3e663b530330d4a21fc" exitCode=0
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.774326 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l6mz5" event={"ID":"8df9b691-1416-4d79-adfb-c00f80cadac4","Type":"ContainerDied","Data":"de5ae22f45551f2ab1cc85f4ccaac35775024dbf85d9c3e663b530330d4a21fc"}
Jan 05 23:28:11 crc kubenswrapper[5034]: I0105 23:28:11.774484 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l6mz5" event={"ID":"8df9b691-1416-4d79-adfb-c00f80cadac4","Type":"ContainerStarted","Data":"b0b38c269b31676a3244ce97bf0a8380350142bcee845323b7b41091655fb907"}
Jan 05 23:28:12 crc kubenswrapper[5034]: I0105 23:28:12.788974 5034 generic.go:334] "Generic (PLEG): container finished" podID="2483105c-df62-435f-a09c-cc9750dd2850" containerID="f01fc3757f912a171babb5367dc03dfbdb4ef64762c8126064c3783f35183408" exitCode=0
Jan 05 23:28:12 crc kubenswrapper[5034]: I0105 23:28:12.789895 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8rz48" event={"ID":"2483105c-df62-435f-a09c-cc9750dd2850","Type":"ContainerDied","Data":"f01fc3757f912a171babb5367dc03dfbdb4ef64762c8126064c3783f35183408"}
Jan 05 23:28:12 crc kubenswrapper[5034]: I0105 23:28:12.796736 5034 generic.go:334] "Generic (PLEG): container finished" podID="89dfae35-d17d-42c1-b717-0bb03abf7fc7" containerID="dbd11c0dd9721aa53c2c4d6b066a49c79479cd418e5fb16e44376b6e5c2ba099" exitCode=0
Jan 05 23:28:12 crc kubenswrapper[5034]: I0105 23:28:12.796846 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7c86-account-create-update-plzz9" event={"ID":"89dfae35-d17d-42c1-b717-0bb03abf7fc7","Type":"ContainerDied","Data":"dbd11c0dd9721aa53c2c4d6b066a49c79479cd418e5fb16e44376b6e5c2ba099"}
Jan 05 23:28:12 crc kubenswrapper[5034]: I0105 23:28:12.799160 5034 generic.go:334] "Generic (PLEG): container finished" podID="3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5" containerID="86b4a482be6ced31dbb0f1e645bdccf50fe0a4708e4a12404b9ed238a9e90f0f" exitCode=0
Jan 05 23:28:12 crc kubenswrapper[5034]: I0105 23:28:12.799320 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" event={"ID":"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5","Type":"ContainerDied","Data":"86b4a482be6ced31dbb0f1e645bdccf50fe0a4708e4a12404b9ed238a9e90f0f"}
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.289996 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dhgbn"
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.296636 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l6mz5"
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.300728 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-869d-account-create-update-2dgbp"
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.408565 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbl7b\" (UniqueName: \"kubernetes.io/projected/8df9b691-1416-4d79-adfb-c00f80cadac4-kube-api-access-lbl7b\") pod \"8df9b691-1416-4d79-adfb-c00f80cadac4\" (UID: \"8df9b691-1416-4d79-adfb-c00f80cadac4\") "
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.408664 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lftjc\" (UniqueName: \"kubernetes.io/projected/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-kube-api-access-lftjc\") pod \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\" (UID: \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\") "
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.408712 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df9b691-1416-4d79-adfb-c00f80cadac4-operator-scripts\") pod \"8df9b691-1416-4d79-adfb-c00f80cadac4\" (UID: \"8df9b691-1416-4d79-adfb-c00f80cadac4\") "
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.408740 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efdb7a79-8ad0-4b33-ada0-48c0613e3541-operator-scripts\") pod \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\" (UID: \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\") "
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.408883 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-operator-scripts\") pod \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\" (UID: \"47fc0f5e-1048-4339-b2f7-7a67916d7f0e\") "
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.408951 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwpcd\" (UniqueName: \"kubernetes.io/projected/efdb7a79-8ad0-4b33-ada0-48c0613e3541-kube-api-access-lwpcd\") pod \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\" (UID: \"efdb7a79-8ad0-4b33-ada0-48c0613e3541\") "
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.409675 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df9b691-1416-4d79-adfb-c00f80cadac4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8df9b691-1416-4d79-adfb-c00f80cadac4" (UID: "8df9b691-1416-4d79-adfb-c00f80cadac4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.409731 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efdb7a79-8ad0-4b33-ada0-48c0613e3541-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efdb7a79-8ad0-4b33-ada0-48c0613e3541" (UID: "efdb7a79-8ad0-4b33-ada0-48c0613e3541"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.410056 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47fc0f5e-1048-4339-b2f7-7a67916d7f0e" (UID: "47fc0f5e-1048-4339-b2f7-7a67916d7f0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.415451 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdb7a79-8ad0-4b33-ada0-48c0613e3541-kube-api-access-lwpcd" (OuterVolumeSpecName: "kube-api-access-lwpcd") pod "efdb7a79-8ad0-4b33-ada0-48c0613e3541" (UID: "efdb7a79-8ad0-4b33-ada0-48c0613e3541"). InnerVolumeSpecName "kube-api-access-lwpcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.415912 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-kube-api-access-lftjc" (OuterVolumeSpecName: "kube-api-access-lftjc") pod "47fc0f5e-1048-4339-b2f7-7a67916d7f0e" (UID: "47fc0f5e-1048-4339-b2f7-7a67916d7f0e"). InnerVolumeSpecName "kube-api-access-lftjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.417638 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df9b691-1416-4d79-adfb-c00f80cadac4-kube-api-access-lbl7b" (OuterVolumeSpecName: "kube-api-access-lbl7b") pod "8df9b691-1416-4d79-adfb-c00f80cadac4" (UID: "8df9b691-1416-4d79-adfb-c00f80cadac4"). InnerVolumeSpecName "kube-api-access-lbl7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.511223 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.511256 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwpcd\" (UniqueName: \"kubernetes.io/projected/efdb7a79-8ad0-4b33-ada0-48c0613e3541-kube-api-access-lwpcd\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.511267 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbl7b\" (UniqueName: \"kubernetes.io/projected/8df9b691-1416-4d79-adfb-c00f80cadac4-kube-api-access-lbl7b\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.511279 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lftjc\" (UniqueName: \"kubernetes.io/projected/47fc0f5e-1048-4339-b2f7-7a67916d7f0e-kube-api-access-lftjc\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.511288 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df9b691-1416-4d79-adfb-c00f80cadac4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.511296 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efdb7a79-8ad0-4b33-ada0-48c0613e3541-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.809843 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dhgbn" event={"ID":"efdb7a79-8ad0-4b33-ada0-48c0613e3541","Type":"ContainerDied","Data":"cdc596835e4012f3d8a0cf15506728b33791e7abce8f2b37e9bb9e7afee744ef"}
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.810284 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdc596835e4012f3d8a0cf15506728b33791e7abce8f2b37e9bb9e7afee744ef"
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.810119 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dhgbn"
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.811347 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l6mz5" event={"ID":"8df9b691-1416-4d79-adfb-c00f80cadac4","Type":"ContainerDied","Data":"b0b38c269b31676a3244ce97bf0a8380350142bcee845323b7b41091655fb907"}
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.811374 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0b38c269b31676a3244ce97bf0a8380350142bcee845323b7b41091655fb907"
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.811428 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l6mz5"
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.813246 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-869d-account-create-update-2dgbp"
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.814163 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-869d-account-create-update-2dgbp" event={"ID":"47fc0f5e-1048-4339-b2f7-7a67916d7f0e","Type":"ContainerDied","Data":"982f93e75441970c5f2fa3d2630c870df363f3babf45a8b0ec7ffb07929af260"}
Jan 05 23:28:13 crc kubenswrapper[5034]: I0105 23:28:13.814190 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="982f93e75441970c5f2fa3d2630c870df363f3babf45a8b0ec7ffb07929af260"
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.830779 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7c86-account-create-update-plzz9" event={"ID":"89dfae35-d17d-42c1-b717-0bb03abf7fc7","Type":"ContainerDied","Data":"01ea30e718caff960e381d0989220addb4fa9f3b2d110293b89a6c12d48f0049"}
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.831623 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ea30e718caff960e381d0989220addb4fa9f3b2d110293b89a6c12d48f0049"
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.832876 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-15ab-account-create-update-rmxq6" event={"ID":"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5","Type":"ContainerDied","Data":"859011dbd61e0fffb0968af9be2bf7d7a99d2b79846360a075eb8173716f5136"}
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.832937 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="859011dbd61e0fffb0968af9be2bf7d7a99d2b79846360a075eb8173716f5136"
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.834577 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8rz48" event={"ID":"2483105c-df62-435f-a09c-cc9750dd2850","Type":"ContainerDied","Data":"0fc58203e7d45af98c3b2ac97d02ff11e7cc7f768de5e27d1001e77629575cd8"}
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.834603 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fc58203e7d45af98c3b2ac97d02ff11e7cc7f768de5e27d1001e77629575cd8"
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.836575 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-15ab-account-create-update-rmxq6"
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.848546 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.858877 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8rz48"
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.992798 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-operator-scripts\") pod \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\" (UID: \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\") "
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.993016 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdzf7\" (UniqueName: \"kubernetes.io/projected/2483105c-df62-435f-a09c-cc9750dd2850-kube-api-access-vdzf7\") pod \"2483105c-df62-435f-a09c-cc9750dd2850\" (UID: \"2483105c-df62-435f-a09c-cc9750dd2850\") "
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.993561 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5" (UID: "3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.993844 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dfae35-d17d-42c1-b717-0bb03abf7fc7-operator-scripts\") pod \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\" (UID: \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\") "
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.993875 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r2r5\" (UniqueName: \"kubernetes.io/projected/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-kube-api-access-6r2r5\") pod \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\" (UID: \"3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5\") "
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.993957 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r7kd\" (UniqueName: \"kubernetes.io/projected/89dfae35-d17d-42c1-b717-0bb03abf7fc7-kube-api-access-6r7kd\") pod \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\" (UID: \"89dfae35-d17d-42c1-b717-0bb03abf7fc7\") "
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.993988 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2483105c-df62-435f-a09c-cc9750dd2850-operator-scripts\") pod \"2483105c-df62-435f-a09c-cc9750dd2850\" (UID: \"2483105c-df62-435f-a09c-cc9750dd2850\") "
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.994739 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.994907 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89dfae35-d17d-42c1-b717-0bb03abf7fc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89dfae35-d17d-42c1-b717-0bb03abf7fc7" (UID: "89dfae35-d17d-42c1-b717-0bb03abf7fc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.995008 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2483105c-df62-435f-a09c-cc9750dd2850-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2483105c-df62-435f-a09c-cc9750dd2850" (UID: "2483105c-df62-435f-a09c-cc9750dd2850"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:28:14 crc kubenswrapper[5034]: I0105 23:28:14.999541 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2483105c-df62-435f-a09c-cc9750dd2850-kube-api-access-vdzf7" (OuterVolumeSpecName: "kube-api-access-vdzf7") pod "2483105c-df62-435f-a09c-cc9750dd2850" (UID: "2483105c-df62-435f-a09c-cc9750dd2850"). InnerVolumeSpecName "kube-api-access-vdzf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.000889 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-kube-api-access-6r2r5" (OuterVolumeSpecName: "kube-api-access-6r2r5") pod "3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5" (UID: "3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5"). InnerVolumeSpecName "kube-api-access-6r2r5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.001460 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89dfae35-d17d-42c1-b717-0bb03abf7fc7-kube-api-access-6r7kd" (OuterVolumeSpecName: "kube-api-access-6r7kd") pod "89dfae35-d17d-42c1-b717-0bb03abf7fc7" (UID: "89dfae35-d17d-42c1-b717-0bb03abf7fc7"). InnerVolumeSpecName "kube-api-access-6r7kd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.097132 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdzf7\" (UniqueName: \"kubernetes.io/projected/2483105c-df62-435f-a09c-cc9750dd2850-kube-api-access-vdzf7\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.097185 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dfae35-d17d-42c1-b717-0bb03abf7fc7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.097203 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r2r5\" (UniqueName: \"kubernetes.io/projected/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5-kube-api-access-6r2r5\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.097215 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r7kd\" (UniqueName: \"kubernetes.io/projected/89dfae35-d17d-42c1-b717-0bb03abf7fc7-kube-api-access-6r7kd\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.097228 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2483105c-df62-435f-a09c-cc9750dd2850-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.844023 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7c86-account-create-update-plzz9"
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.844044 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-15ab-account-create-update-rmxq6"
Jan 05 23:28:15 crc kubenswrapper[5034]: I0105 23:28:15.844063 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8rz48"
Jan 05 23:28:18 crc kubenswrapper[5034]: I0105 23:28:18.838299 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"
Jan 05 23:28:18 crc kubenswrapper[5034]: E0105 23:28:18.839110 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.821645 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cqpdk"]
Jan 05 23:28:20 crc kubenswrapper[5034]: E0105 23:28:20.824937 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483105c-df62-435f-a09c-cc9750dd2850" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.825123 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483105c-df62-435f-a09c-cc9750dd2850" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: E0105 23:28:20.825270 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df9b691-1416-4d79-adfb-c00f80cadac4" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.825385 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df9b691-1416-4d79-adfb-c00f80cadac4" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: E0105 23:28:20.825503 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89dfae35-d17d-42c1-b717-0bb03abf7fc7" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.825611 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="89dfae35-d17d-42c1-b717-0bb03abf7fc7" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: E0105 23:28:20.825730 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.825831 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: E0105 23:28:20.825947 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdb7a79-8ad0-4b33-ada0-48c0613e3541" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.826061 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdb7a79-8ad0-4b33-ada0-48c0613e3541" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: E0105 23:28:20.826228 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47fc0f5e-1048-4339-b2f7-7a67916d7f0e" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.826345 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="47fc0f5e-1048-4339-b2f7-7a67916d7f0e" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.826775 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2483105c-df62-435f-a09c-cc9750dd2850" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.826906 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="efdb7a79-8ad0-4b33-ada0-48c0613e3541" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.827019 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="89dfae35-d17d-42c1-b717-0bb03abf7fc7" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.827176 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.827291 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df9b691-1416-4d79-adfb-c00f80cadac4" containerName="mariadb-database-create"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.827398 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="47fc0f5e-1048-4339-b2f7-7a67916d7f0e" containerName="mariadb-account-create-update"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.828578 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.831112 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.831800 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-scripts\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.831898 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.832113 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7qd\" (UniqueName: \"kubernetes.io/projected/09b12947-70bd-491c-99ad-28221fa1f2a2-kube-api-access-2n7qd\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.832208 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.832351 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-config-data\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.835648 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m72lm"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.839826 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cqpdk"]
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.934596 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-scripts\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.934664 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.934757 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n7qd\" (UniqueName: \"kubernetes.io/projected/09b12947-70bd-491c-99ad-28221fa1f2a2-kube-api-access-2n7qd\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.934874 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-config-data\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.955794 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-scripts\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.956041 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.956184 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-config-data\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:20 crc kubenswrapper[5034]: I0105 23:28:20.966280 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n7qd\" (UniqueName: \"kubernetes.io/projected/09b12947-70bd-491c-99ad-28221fa1f2a2-kube-api-access-2n7qd\") pod \"nova-cell0-conductor-db-sync-cqpdk\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") " pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:21 crc kubenswrapper[5034]: I0105 23:28:21.160754 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:21 crc kubenswrapper[5034]: I0105 23:28:21.688105 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cqpdk"]
Jan 05 23:28:21 crc kubenswrapper[5034]: I0105 23:28:21.905603 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cqpdk" event={"ID":"09b12947-70bd-491c-99ad-28221fa1f2a2","Type":"ContainerStarted","Data":"c25494f7684e1b2ec94aa151c470e5fafefe28be1d7b8f276a2f6e448fdad32f"}
Jan 05 23:28:22 crc kubenswrapper[5034]: I0105 23:28:22.917203 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cqpdk" event={"ID":"09b12947-70bd-491c-99ad-28221fa1f2a2","Type":"ContainerStarted","Data":"dccb0bee25d2718cedd5ce370849106c0293a3e55c45f5a1333324b543e8e571"}
Jan 05 23:28:22 crc kubenswrapper[5034]: I0105 23:28:22.944862 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cqpdk" podStartSLOduration=2.944838961 podStartE2EDuration="2.944838961s" podCreationTimestamp="2026-01-05 23:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:22.937304918 +0000 UTC m=+5795.309304357" watchObservedRunningTime="2026-01-05 23:28:22.944838961 +0000 UTC m=+5795.316838400"
Jan 05 23:28:27 crc kubenswrapper[5034]: I0105 23:28:27.977311 5034 generic.go:334] "Generic (PLEG): container finished" podID="09b12947-70bd-491c-99ad-28221fa1f2a2" containerID="dccb0bee25d2718cedd5ce370849106c0293a3e55c45f5a1333324b543e8e571" exitCode=0
Jan 05 23:28:27 crc kubenswrapper[5034]: I0105 23:28:27.977462 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cqpdk" event={"ID":"09b12947-70bd-491c-99ad-28221fa1f2a2","Type":"ContainerDied","Data":"dccb0bee25d2718cedd5ce370849106c0293a3e55c45f5a1333324b543e8e571"}
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.328806 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.418760 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n7qd\" (UniqueName: \"kubernetes.io/projected/09b12947-70bd-491c-99ad-28221fa1f2a2-kube-api-access-2n7qd\") pod \"09b12947-70bd-491c-99ad-28221fa1f2a2\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") "
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.419065 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-combined-ca-bundle\") pod \"09b12947-70bd-491c-99ad-28221fa1f2a2\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") "
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.419125 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-config-data\") pod \"09b12947-70bd-491c-99ad-28221fa1f2a2\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") "
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.419168 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-scripts\") pod \"09b12947-70bd-491c-99ad-28221fa1f2a2\" (UID: \"09b12947-70bd-491c-99ad-28221fa1f2a2\") "
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.428347 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b12947-70bd-491c-99ad-28221fa1f2a2-kube-api-access-2n7qd" (OuterVolumeSpecName: "kube-api-access-2n7qd") pod "09b12947-70bd-491c-99ad-28221fa1f2a2" (UID: "09b12947-70bd-491c-99ad-28221fa1f2a2"). InnerVolumeSpecName "kube-api-access-2n7qd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.432365 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-scripts" (OuterVolumeSpecName: "scripts") pod "09b12947-70bd-491c-99ad-28221fa1f2a2" (UID: "09b12947-70bd-491c-99ad-28221fa1f2a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.452980 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09b12947-70bd-491c-99ad-28221fa1f2a2" (UID: "09b12947-70bd-491c-99ad-28221fa1f2a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.476811 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-config-data" (OuterVolumeSpecName: "config-data") pod "09b12947-70bd-491c-99ad-28221fa1f2a2" (UID: "09b12947-70bd-491c-99ad-28221fa1f2a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.521349 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n7qd\" (UniqueName: \"kubernetes.io/projected/09b12947-70bd-491c-99ad-28221fa1f2a2-kube-api-access-2n7qd\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.521401 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.521416 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:29 crc kubenswrapper[5034]: I0105 23:28:29.521426 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b12947-70bd-491c-99ad-28221fa1f2a2-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.003364 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cqpdk" event={"ID":"09b12947-70bd-491c-99ad-28221fa1f2a2","Type":"ContainerDied","Data":"c25494f7684e1b2ec94aa151c470e5fafefe28be1d7b8f276a2f6e448fdad32f"}
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.003433 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c25494f7684e1b2ec94aa151c470e5fafefe28be1d7b8f276a2f6e448fdad32f"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.003765 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cqpdk"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.097612 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 05 23:28:30 crc kubenswrapper[5034]: E0105 23:28:30.099149 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b12947-70bd-491c-99ad-28221fa1f2a2" containerName="nova-cell0-conductor-db-sync"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.099247 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b12947-70bd-491c-99ad-28221fa1f2a2" containerName="nova-cell0-conductor-db-sync"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.099810 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b12947-70bd-491c-99ad-28221fa1f2a2" containerName="nova-cell0-conductor-db-sync"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.101050 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.103285 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m72lm"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.112342 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.121318 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.242786 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2b754d-ffa3-4818-8e7e-519696b826fd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.242855 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2b754d-ffa3-4818-8e7e-519696b826fd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.242895 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdm5b\" (UniqueName: \"kubernetes.io/projected/af2b754d-ffa3-4818-8e7e-519696b826fd-kube-api-access-jdm5b\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.345049 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2b754d-ffa3-4818-8e7e-519696b826fd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.345156 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2b754d-ffa3-4818-8e7e-519696b826fd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.345207 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdm5b\" (UniqueName: \"kubernetes.io/projected/af2b754d-ffa3-4818-8e7e-519696b826fd-kube-api-access-jdm5b\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.352690 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2b754d-ffa3-4818-8e7e-519696b826fd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.353461 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2b754d-ffa3-4818-8e7e-519696b826fd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.370885 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdm5b\" (UniqueName: \"kubernetes.io/projected/af2b754d-ffa3-4818-8e7e-519696b826fd-kube-api-access-jdm5b\") pod \"nova-cell0-conductor-0\" (UID: \"af2b754d-ffa3-4818-8e7e-519696b826fd\") " pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:30 crc kubenswrapper[5034]: I0105 23:28:30.435635 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:31 crc kubenswrapper[5034]: I0105 23:28:31.047829 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 05 23:28:32 crc kubenswrapper[5034]: I0105 23:28:32.035958 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"af2b754d-ffa3-4818-8e7e-519696b826fd","Type":"ContainerStarted","Data":"242556d483f87cc6644eaaed918df56ac04cd4ddc4025bf7051ce2e0af0cef39"}
Jan 05 23:28:32 crc kubenswrapper[5034]: I0105 23:28:32.036363 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"af2b754d-ffa3-4818-8e7e-519696b826fd","Type":"ContainerStarted","Data":"81d4f7be3ae7bcf7bef1972ae4746cd17b1444cfa31901ad5adabb5f46c634bd"}
Jan 05 23:28:32 crc kubenswrapper[5034]: I0105 23:28:32.036414 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:32 crc kubenswrapper[5034]: I0105 23:28:32.060901 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.060882609 podStartE2EDuration="2.060882609s" podCreationTimestamp="2026-01-05 23:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:32.056006411 +0000 UTC m=+5804.428005850" watchObservedRunningTime="2026-01-05 23:28:32.060882609 +0000 UTC m=+5804.432882038"
Jan 05 23:28:32 crc kubenswrapper[5034]: I0105 23:28:32.838971 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"
Jan 05 23:28:32 crc kubenswrapper[5034]: E0105 23:28:32.839634 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:28:40 crc kubenswrapper[5034]: I0105 23:28:40.464662 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 05 23:28:40 crc kubenswrapper[5034]: I0105 23:28:40.920849 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-m46xl"]
Jan 05 23:28:40 crc kubenswrapper[5034]: I0105 23:28:40.922223 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:40 crc kubenswrapper[5034]: I0105 23:28:40.925296 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 05 23:28:40 crc kubenswrapper[5034]: I0105 23:28:40.925571 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 05 23:28:40 crc kubenswrapper[5034]: I0105 23:28:40.933997 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m46xl"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.063529 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.065118 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.070480 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.077488 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.101786 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-config-data\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.102007 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.102061 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-scripts\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.102268 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rb27\" (UniqueName: \"kubernetes.io/projected/9d5ebff2-2a88-4059-b5dc-15a654cf534f-kube-api-access-2rb27\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.144185 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.145895 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.151480 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.164335 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.200227 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.201873 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.208682 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.210444 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.210488 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-scripts\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.210541 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-config-data\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.210603 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rb27\" (UniqueName: \"kubernetes.io/projected/9d5ebff2-2a88-4059-b5dc-15a654cf534f-kube-api-access-2rb27\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.210650 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvg9l\" (UniqueName: \"kubernetes.io/projected/563f42ac-73b9-48c3-819d-c94c992570d1-kube-api-access-jvg9l\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.210696 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-config-data\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.210740 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.220550 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-scripts\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.220944 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.224605 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-config-data\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.242192 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.258741 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rb27\" (UniqueName: \"kubernetes.io/projected/9d5ebff2-2a88-4059-b5dc-15a654cf534f-kube-api-access-2rb27\") pod \"nova-cell0-cell-mapping-m46xl\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") " pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.289971 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.291747 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.296504 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.312215 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314160 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314221 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314245 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d145cb6-0c7c-4277-a234-90ce8cedff5b-logs\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314310 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314350 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7rr5\" (UniqueName: \"kubernetes.io/projected/3f60f6bc-8cb3-4466-999a-d8aefab40e16-kube-api-access-w7rr5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314371 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-config-data\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314391 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77qg\" (UniqueName: \"kubernetes.io/projected/9d145cb6-0c7c-4277-a234-90ce8cedff5b-kube-api-access-h77qg\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314444 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314468 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-config-data\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.314512 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvg9l\" (UniqueName: \"kubernetes.io/projected/563f42ac-73b9-48c3-819d-c94c992570d1-kube-api-access-jvg9l\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.324788 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-config-data\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.339217 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.358733 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvg9l\" (UniqueName: \"kubernetes.io/projected/563f42ac-73b9-48c3-819d-c94c992570d1-kube-api-access-jvg9l\") pod \"nova-scheduler-0\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.387389 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.389959 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79684879-jpnp6"]
Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.391901 5034 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.408105 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79684879-jpnp6"] Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420273 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-logs\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420356 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420389 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420405 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52g8f\" (UniqueName: \"kubernetes.io/projected/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-kube-api-access-52g8f\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420461 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-config-data\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420529 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420609 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420639 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d145cb6-0c7c-4277-a234-90ce8cedff5b-logs\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420664 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-config-data\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 
23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420828 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7rr5\" (UniqueName: \"kubernetes.io/projected/3f60f6bc-8cb3-4466-999a-d8aefab40e16-kube-api-access-w7rr5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.420901 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h77qg\" (UniqueName: \"kubernetes.io/projected/9d145cb6-0c7c-4277-a234-90ce8cedff5b-kube-api-access-h77qg\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.424860 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d145cb6-0c7c-4277-a234-90ce8cedff5b-logs\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.430205 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-config-data\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.434267 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.440684 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.449622 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7rr5\" (UniqueName: \"kubernetes.io/projected/3f60f6bc-8cb3-4466-999a-d8aefab40e16-kube-api-access-w7rr5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.468569 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h77qg\" (UniqueName: \"kubernetes.io/projected/9d145cb6-0c7c-4277-a234-90ce8cedff5b-kube-api-access-h77qg\") pod \"nova-api-0\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") " pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.474861 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.475879 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.524717 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-config\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.524838 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-logs\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.524891 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-sb\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.524934 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.524966 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52g8f\" (UniqueName: \"kubernetes.io/projected/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-kube-api-access-52g8f\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.525027 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-nb\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.525109 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-dns-svc\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.525172 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-config-data\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.525241 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dhw\" (UniqueName: \"kubernetes.io/projected/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-kube-api-access-66dhw\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc 
kubenswrapper[5034]: I0105 23:28:41.526194 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-logs\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.531860 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-config-data\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.536588 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.550224 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52g8f\" (UniqueName: \"kubernetes.io/projected/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-kube-api-access-52g8f\") pod \"nova-metadata-0\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.556676 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m46xl" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.626784 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-config\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.626861 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-sb\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.626910 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-nb\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.626954 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-dns-svc\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.627019 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66dhw\" (UniqueName: \"kubernetes.io/projected/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-kube-api-access-66dhw\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 
23:28:41.627944 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-config\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.628506 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-dns-svc\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.630721 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-sb\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.637041 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-nb\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.653400 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66dhw\" (UniqueName: \"kubernetes.io/projected/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-kube-api-access-66dhw\") pod \"dnsmasq-dns-5d79684879-jpnp6\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") " pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.722759 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.793738 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.795924 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:28:41 crc kubenswrapper[5034]: I0105 23:28:41.991111 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 23:28:42 crc kubenswrapper[5034]: W0105 23:28:42.018154 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod563f42ac_73b9_48c3_819d_c94c992570d1.slice/crio-d7c20453a04897bc6e82c6d0db425ff1a8fc8a1f7cb44b51faf2d4548bda75c5 WatchSource:0}: Error finding container d7c20453a04897bc6e82c6d0db425ff1a8fc8a1f7cb44b51faf2d4548bda75c5: Status 404 returned error can't find the container with id d7c20453a04897bc6e82c6d0db425ff1a8fc8a1f7cb44b51faf2d4548bda75c5 Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.097834 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.181663 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwjl"] Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.195033 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.195193 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"563f42ac-73b9-48c3-819d-c94c992570d1","Type":"ContainerStarted","Data":"d7c20453a04897bc6e82c6d0db425ff1a8fc8a1f7cb44b51faf2d4548bda75c5"} Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.195280 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d145cb6-0c7c-4277-a234-90ce8cedff5b","Type":"ContainerStarted","Data":"8ac379d5d0bade03d0330ace3dbdb3f467c241b4e99d5588a23731977dfb916f"} Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.208030 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.208315 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwjl"] Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.208434 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.224483 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m46xl"] Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.354458 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfrj4\" (UniqueName: \"kubernetes.io/projected/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-kube-api-access-tfrj4\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.354563 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.354836 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-config-data\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.354934 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-scripts\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.396549 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79684879-jpnp6"] Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.405303 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.457630 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfrj4\" (UniqueName: 
\"kubernetes.io/projected/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-kube-api-access-tfrj4\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.458237 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.458353 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-config-data\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.458470 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-scripts\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.462208 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-scripts\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.468904 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.469608 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-config-data\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.488044 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfrj4\" (UniqueName: \"kubernetes.io/projected/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-kube-api-access-tfrj4\") pod \"nova-cell1-conductor-db-sync-5cwjl\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") " pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.532628 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5cwjl" Jan 05 23:28:42 crc kubenswrapper[5034]: I0105 23:28:42.535247 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.156598 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwjl"] Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.209313 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6","Type":"ContainerStarted","Data":"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.209371 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6","Type":"ContainerStarted","Data":"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.209382 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6","Type":"ContainerStarted","Data":"9384e96d8410b83b0974c978401133220594612635ce38b970881e7149b19ed6"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.213769 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3f60f6bc-8cb3-4466-999a-d8aefab40e16","Type":"ContainerStarted","Data":"84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.213804 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3f60f6bc-8cb3-4466-999a-d8aefab40e16","Type":"ContainerStarted","Data":"fca03c0fead81f01ccd9b82e1d3f7afed18241b5f84421a145fb5df81dfa2889"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.220987 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m46xl" event={"ID":"9d5ebff2-2a88-4059-b5dc-15a654cf534f","Type":"ContainerStarted","Data":"d982c4fc689dd58835e7c49bd3f7c97f28d00e62902f3166ebef16e2c6064fbe"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.221019 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m46xl" event={"ID":"9d5ebff2-2a88-4059-b5dc-15a654cf534f","Type":"ContainerStarted","Data":"faf3c7f19b184d1fcbc7b0b6cb29d8eea3e49e9e2c5a4b71205614d818b4a308"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.235527 5034 generic.go:334] "Generic (PLEG): container finished" podID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" containerID="01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f" exitCode=0 Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.235595 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" event={"ID":"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97","Type":"ContainerDied","Data":"01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.235643 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" event={"ID":"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97","Type":"ContainerStarted","Data":"c2f7a562fcf602d1ed4ed0128c0acc00b64017648844c4dd8cb912b997f9c50c"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.246445 5034 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-scheduler-0" event={"ID":"563f42ac-73b9-48c3-819d-c94c992570d1","Type":"ContainerStarted","Data":"0c8261df2bd5144b8f3a92f066a86448f59dfb05e0af0e9b2378796a129fffd2"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.259480 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5cwjl" event={"ID":"6b19617f-1b10-4ff2-ad1f-f31d20663dcd","Type":"ContainerStarted","Data":"77938106a56524b26d559b867ba8be278970dc84de105ae94ebed6a14834d01c"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.265383 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-m46xl" podStartSLOduration=3.265360306 podStartE2EDuration="3.265360306s" podCreationTimestamp="2026-01-05 23:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:43.257495513 +0000 UTC m=+5815.629494952" watchObservedRunningTime="2026-01-05 23:28:43.265360306 +0000 UTC m=+5815.637359755" Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.269539 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.269519184 podStartE2EDuration="2.269519184s" podCreationTimestamp="2026-01-05 23:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:43.232286958 +0000 UTC m=+5815.604286417" watchObservedRunningTime="2026-01-05 23:28:43.269519184 +0000 UTC m=+5815.641518623" Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.284312 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d145cb6-0c7c-4277-a234-90ce8cedff5b","Type":"ContainerStarted","Data":"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.284366 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d145cb6-0c7c-4277-a234-90ce8cedff5b","Type":"ContainerStarted","Data":"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95"} Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.309403 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.3093708250000002 podStartE2EDuration="2.309370825s" podCreationTimestamp="2026-01-05 23:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:43.283316656 +0000 UTC m=+5815.655316085" watchObservedRunningTime="2026-01-05 23:28:43.309370825 +0000 UTC m=+5815.681370264" Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.341449 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.3414229349999998 podStartE2EDuration="2.341422935s" podCreationTimestamp="2026-01-05 23:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:43.337255817 +0000 UTC m=+5815.709255256" watchObservedRunningTime="2026-01-05 23:28:43.341422935 +0000 UTC m=+5815.713422374" Jan 05 23:28:43 crc kubenswrapper[5034]: I0105 23:28:43.366099 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.366044404 podStartE2EDuration="2.366044404s" podCreationTimestamp="2026-01-05 23:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:43.360114155 +0000 UTC m=+5815.732113594" watchObservedRunningTime="2026-01-05 23:28:43.366044404 +0000 UTC m=+5815.738043843" Jan 05 23:28:44 crc kubenswrapper[5034]: I0105 23:28:44.301391 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5cwjl" event={"ID":"6b19617f-1b10-4ff2-ad1f-f31d20663dcd","Type":"ContainerStarted","Data":"193838d452ccbf14ea1c3b42b9c9d02317a91e80f551ebaa5397b9de1fe78ab4"} Jan 05 23:28:44 crc kubenswrapper[5034]: I0105 23:28:44.308543 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" event={"ID":"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97","Type":"ContainerStarted","Data":"9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23"} Jan 05 23:28:44 crc kubenswrapper[5034]: I0105 23:28:44.309401 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:44 crc kubenswrapper[5034]: I0105 23:28:44.330738 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5cwjl" podStartSLOduration=2.330713051 podStartE2EDuration="2.330713051s" podCreationTimestamp="2026-01-05 23:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:44.323602369 +0000 UTC m=+5816.695601808" watchObservedRunningTime="2026-01-05 23:28:44.330713051 +0000 UTC m=+5816.702712490" Jan 05 23:28:44 crc kubenswrapper[5034]: I0105 23:28:44.378212 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" podStartSLOduration=3.3781792680000002 podStartE2EDuration="3.378179268s" podCreationTimestamp="2026-01-05 23:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:44.353274381 +0000 UTC m=+5816.725273820" watchObservedRunningTime="2026-01-05 23:28:44.378179268 +0000 UTC m=+5816.750178707" Jan 05 23:28:45 crc kubenswrapper[5034]: I0105 23:28:45.028296 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:45 crc kubenswrapper[5034]: I0105 23:28:45.039178 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 23:28:45 crc kubenswrapper[5034]: I0105 23:28:45.313420 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerName="nova-metadata-log" containerID="cri-o://f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219" gracePeriod=30 Jan 05 23:28:45 crc kubenswrapper[5034]: I0105 23:28:45.313830 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3f60f6bc-8cb3-4466-999a-d8aefab40e16" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc" gracePeriod=30 Jan 05 23:28:45 crc kubenswrapper[5034]: I0105 23:28:45.315612 5034 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerName="nova-metadata-metadata" containerID="cri-o://64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449" gracePeriod=30 Jan 05 23:28:45 crc kubenswrapper[5034]: I0105 23:28:45.928838 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.058995 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-combined-ca-bundle\") pod \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.059354 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-config-data\") pod \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.059440 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-logs\") pod \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.059468 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52g8f\" (UniqueName: \"kubernetes.io/projected/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-kube-api-access-52g8f\") pod \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\" (UID: \"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6\") " Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.060047 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-logs" (OuterVolumeSpecName: "logs") pod "fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" (UID: "fd79b890-3ba9-4ed7-8b30-7289e8cee5d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.066599 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-kube-api-access-52g8f" (OuterVolumeSpecName: "kube-api-access-52g8f") pod "fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" (UID: "fd79b890-3ba9-4ed7-8b30-7289e8cee5d6"). InnerVolumeSpecName "kube-api-access-52g8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.092752 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" (UID: "fd79b890-3ba9-4ed7-8b30-7289e8cee5d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.116755 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-config-data" (OuterVolumeSpecName: "config-data") pod "fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" (UID: "fd79b890-3ba9-4ed7-8b30-7289e8cee5d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.153221 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.161907 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.161949 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.161964 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52g8f\" (UniqueName: \"kubernetes.io/projected/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-kube-api-access-52g8f\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.161977 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.263498 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-config-data\") pod \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.263611 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7rr5\" (UniqueName: \"kubernetes.io/projected/3f60f6bc-8cb3-4466-999a-d8aefab40e16-kube-api-access-w7rr5\") pod \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.263683 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-combined-ca-bundle\") pod \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\" (UID: \"3f60f6bc-8cb3-4466-999a-d8aefab40e16\") " Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.267329 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f60f6bc-8cb3-4466-999a-d8aefab40e16-kube-api-access-w7rr5" (OuterVolumeSpecName: "kube-api-access-w7rr5") pod "3f60f6bc-8cb3-4466-999a-d8aefab40e16" (UID: "3f60f6bc-8cb3-4466-999a-d8aefab40e16"). InnerVolumeSpecName "kube-api-access-w7rr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.290167 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f60f6bc-8cb3-4466-999a-d8aefab40e16" (UID: "3f60f6bc-8cb3-4466-999a-d8aefab40e16"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.290192 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-config-data" (OuterVolumeSpecName: "config-data") pod "3f60f6bc-8cb3-4466-999a-d8aefab40e16" (UID: "3f60f6bc-8cb3-4466-999a-d8aefab40e16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.325280 5034 generic.go:334] "Generic (PLEG): container finished" podID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerID="64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449" exitCode=0 Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.325318 5034 generic.go:334] "Generic (PLEG): container finished" podID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerID="f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219" exitCode=143 Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.325437 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.327811 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6","Type":"ContainerDied","Data":"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449"} Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.327880 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6","Type":"ContainerDied","Data":"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219"} Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.327901 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd79b890-3ba9-4ed7-8b30-7289e8cee5d6","Type":"ContainerDied","Data":"9384e96d8410b83b0974c978401133220594612635ce38b970881e7149b19ed6"} Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.327923 5034 scope.go:117] "RemoveContainer" containerID="64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.331561 5034 generic.go:334] "Generic (PLEG): container finished" podID="3f60f6bc-8cb3-4466-999a-d8aefab40e16" containerID="84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc" exitCode=0 Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.331706 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3f60f6bc-8cb3-4466-999a-d8aefab40e16","Type":"ContainerDied","Data":"84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc"} Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.331749 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3f60f6bc-8cb3-4466-999a-d8aefab40e16","Type":"ContainerDied","Data":"fca03c0fead81f01ccd9b82e1d3f7afed18241b5f84421a145fb5df81dfa2889"} Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.331826 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.362220 5034 scope.go:117] "RemoveContainer" containerID="f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.363600 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.365754 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.365785 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7rr5\" (UniqueName: \"kubernetes.io/projected/3f60f6bc-8cb3-4466-999a-d8aefab40e16-kube-api-access-w7rr5\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.365795 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f60f6bc-8cb3-4466-999a-d8aefab40e16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.376786 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.388737 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.388844 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.400695 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:46 crc kubenswrapper[5034]: E0105 23:28:46.401240 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerName="nova-metadata-metadata" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.401263 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerName="nova-metadata-metadata" Jan 05 23:28:46 crc kubenswrapper[5034]: E0105 23:28:46.401275 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f60f6bc-8cb3-4466-999a-d8aefab40e16" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.401283 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f60f6bc-8cb3-4466-999a-d8aefab40e16" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 23:28:46 crc kubenswrapper[5034]: E0105 23:28:46.401303 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerName="nova-metadata-log" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.401310 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerName="nova-metadata-log" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.401529 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerName="nova-metadata-log" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.401551 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" containerName="nova-metadata-metadata" Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.401571 
5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f60f6bc-8cb3-4466-999a-d8aefab40e16" containerName="nova-cell1-novncproxy-novncproxy"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.402867 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.408466 5034 scope.go:117] "RemoveContainer" containerID="64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.408657 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.408750 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 05 23:28:46 crc kubenswrapper[5034]: E0105 23:28:46.409501 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449\": container with ID starting with 64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449 not found: ID does not exist" containerID="64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.409532 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449"} err="failed to get container status \"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449\": rpc error: code = NotFound desc = could not find container \"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449\": container with ID starting with 64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449 not found: ID does not exist"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.409556 5034 scope.go:117] "RemoveContainer" containerID="f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.413439 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 23:28:46 crc kubenswrapper[5034]: E0105 23:28:46.415360 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219\": container with ID starting with f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219 not found: ID does not exist" containerID="f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.415440 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219"} err="failed to get container status \"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219\": rpc error: code = NotFound desc = could not find container \"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219\": container with ID starting with f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219 not found: ID does not exist"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.415481 5034 scope.go:117] "RemoveContainer" containerID="64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.416331 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449"} err="failed to get container status \"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449\": rpc error: code = NotFound desc = could not find container \"64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449\": container with ID starting with 64c5c31b9bb245e0bb46176e9c41ae54d52de990c05e41a240468f5292c81449 not found: ID does not exist"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.416382 5034 scope.go:117] "RemoveContainer" containerID="f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.421809 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219"} err="failed to get container status \"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219\": rpc error: code = NotFound desc = could not find container \"f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219\": container with ID starting with f8bd2ff309c945e2c069a552eefe67e08ded9da6901821f6493b5a9fa65c7219 not found: ID does not exist"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.421875 5034 scope.go:117] "RemoveContainer" containerID="84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.431515 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.442108 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.443829 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.446810 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.447088 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.450616 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.461770 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.515157 5034 scope.go:117] "RemoveContainer" containerID="84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc"
Jan 05 23:28:46 crc kubenswrapper[5034]: E0105 23:28:46.515567 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc\": container with ID starting with 84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc not found: ID does not exist" containerID="84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.515604 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc"} err="failed to get container status \"84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc\": rpc error: code = NotFound desc = could not find container \"84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc\": container with ID starting with 84f84d20414183ba7dd34c854b925f710100486d38c481b60ac40e2294bb3bbc not found: ID does not exist"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.569744 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.569914 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.570025 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.570068 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flqd\" (UniqueName: \"kubernetes.io/projected/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-kube-api-access-2flqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.570127 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-config-data\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.570154 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.570181 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.570198 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58869281-7ed3-419b-827e-1d2a512235e2-logs\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.570240 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7477\" (UniqueName: \"kubernetes.io/projected/58869281-7ed3-419b-827e-1d2a512235e2-kube-api-access-m7477\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.570506 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.672775 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.672844 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7477\" (UniqueName: \"kubernetes.io/projected/58869281-7ed3-419b-827e-1d2a512235e2-kube-api-access-m7477\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.672873 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58869281-7ed3-419b-827e-1d2a512235e2-logs\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.672911 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.672995 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.673115 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.673178 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.673213 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2flqd\" (UniqueName: \"kubernetes.io/projected/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-kube-api-access-2flqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.673241 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-config-data\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.673270 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.674631 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58869281-7ed3-419b-827e-1d2a512235e2-logs\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.677515 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.680671 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.680910 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.680968 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.682796 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.691751 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2flqd\" (UniqueName: \"kubernetes.io/projected/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-kube-api-access-2flqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.693519 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7477\" (UniqueName: \"kubernetes.io/projected/58869281-7ed3-419b-827e-1d2a512235e2-kube-api-access-m7477\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.698618 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-config-data\") pod \"nova-metadata-0\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") " pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.707591 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.807633 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.814262 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 05 23:28:46 crc kubenswrapper[5034]: I0105 23:28:46.844001 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"
Jan 05 23:28:46 crc kubenswrapper[5034]: E0105 23:28:46.845542 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:28:47 crc kubenswrapper[5034]: I0105 23:28:47.317884 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 05 23:28:47 crc kubenswrapper[5034]: I0105 23:28:47.347509 5034 generic.go:334] "Generic (PLEG): container finished" podID="6b19617f-1b10-4ff2-ad1f-f31d20663dcd" containerID="193838d452ccbf14ea1c3b42b9c9d02317a91e80f551ebaa5397b9de1fe78ab4" exitCode=0
Jan 05 23:28:47 crc kubenswrapper[5034]: I0105 23:28:47.347657 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5cwjl" event={"ID":"6b19617f-1b10-4ff2-ad1f-f31d20663dcd","Type":"ContainerDied","Data":"193838d452ccbf14ea1c3b42b9c9d02317a91e80f551ebaa5397b9de1fe78ab4"}
Jan 05 23:28:47 crc kubenswrapper[5034]: I0105 23:28:47.350156 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd","Type":"ContainerStarted","Data":"12fffafdf080995ecc666875e150f31eef171320b770aa8835af1d7249fdaa70"}
Jan 05 23:28:47 crc kubenswrapper[5034]: W0105 23:28:47.394545 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58869281_7ed3_419b_827e_1d2a512235e2.slice/crio-e414d32f6e82b7c8c658b101ca8321a761d3204a4591507d8f8bb7e002b57886 WatchSource:0}: Error finding container e414d32f6e82b7c8c658b101ca8321a761d3204a4591507d8f8bb7e002b57886: Status 404 returned error can't find the container with id e414d32f6e82b7c8c658b101ca8321a761d3204a4591507d8f8bb7e002b57886
Jan 05 23:28:47 crc kubenswrapper[5034]: I0105 23:28:47.402274 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 23:28:47 crc kubenswrapper[5034]: I0105 23:28:47.854724 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f60f6bc-8cb3-4466-999a-d8aefab40e16" path="/var/lib/kubelet/pods/3f60f6bc-8cb3-4466-999a-d8aefab40e16/volumes"
Jan 05 23:28:47 crc kubenswrapper[5034]: I0105 23:28:47.855835 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd79b890-3ba9-4ed7-8b30-7289e8cee5d6" path="/var/lib/kubelet/pods/fd79b890-3ba9-4ed7-8b30-7289e8cee5d6/volumes"
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.367026 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd","Type":"ContainerStarted","Data":"9988d066d7b394316e73fe2f37f69fc3145bc0a4ec0c4488c1d2c84dd555d644"}
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.369133 5034 generic.go:334] "Generic (PLEG): container finished" podID="9d5ebff2-2a88-4059-b5dc-15a654cf534f" containerID="d982c4fc689dd58835e7c49bd3f7c97f28d00e62902f3166ebef16e2c6064fbe" exitCode=0
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.369244 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m46xl" event={"ID":"9d5ebff2-2a88-4059-b5dc-15a654cf534f","Type":"ContainerDied","Data":"d982c4fc689dd58835e7c49bd3f7c97f28d00e62902f3166ebef16e2c6064fbe"}
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.377103 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58869281-7ed3-419b-827e-1d2a512235e2","Type":"ContainerStarted","Data":"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a"}
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.377169 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58869281-7ed3-419b-827e-1d2a512235e2","Type":"ContainerStarted","Data":"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6"}
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.377186 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58869281-7ed3-419b-827e-1d2a512235e2","Type":"ContainerStarted","Data":"e414d32f6e82b7c8c658b101ca8321a761d3204a4591507d8f8bb7e002b57886"}
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.430329 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.430304665 podStartE2EDuration="2.430304665s" podCreationTimestamp="2026-01-05 23:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:48.425582131 +0000 UTC m=+5820.797581580" watchObservedRunningTime="2026-01-05 23:28:48.430304665 +0000 UTC m=+5820.802304114"
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.431436 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.431426247 podStartE2EDuration="2.431426247s" podCreationTimestamp="2026-01-05 23:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:48.395187778 +0000 UTC m=+5820.767187227" watchObservedRunningTime="2026-01-05 23:28:48.431426247 +0000 UTC m=+5820.803425696"
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.794760 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5cwjl"
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.945243 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfrj4\" (UniqueName: \"kubernetes.io/projected/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-kube-api-access-tfrj4\") pod \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") "
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.945605 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-config-data\") pod \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") "
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.945684 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-scripts\") pod \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") "
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.945827 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-combined-ca-bundle\") pod \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\" (UID: \"6b19617f-1b10-4ff2-ad1f-f31d20663dcd\") "
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.951934 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-kube-api-access-tfrj4" (OuterVolumeSpecName: "kube-api-access-tfrj4") pod "6b19617f-1b10-4ff2-ad1f-f31d20663dcd" (UID: "6b19617f-1b10-4ff2-ad1f-f31d20663dcd"). InnerVolumeSpecName "kube-api-access-tfrj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.953563 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-scripts" (OuterVolumeSpecName: "scripts") pod "6b19617f-1b10-4ff2-ad1f-f31d20663dcd" (UID: "6b19617f-1b10-4ff2-ad1f-f31d20663dcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.981846 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-config-data" (OuterVolumeSpecName: "config-data") pod "6b19617f-1b10-4ff2-ad1f-f31d20663dcd" (UID: "6b19617f-1b10-4ff2-ad1f-f31d20663dcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:48 crc kubenswrapper[5034]: I0105 23:28:48.988899 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b19617f-1b10-4ff2-ad1f-f31d20663dcd" (UID: "6b19617f-1b10-4ff2-ad1f-f31d20663dcd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.049225 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.049274 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.049283 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.049295 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfrj4\" (UniqueName: \"kubernetes.io/projected/6b19617f-1b10-4ff2-ad1f-f31d20663dcd-kube-api-access-tfrj4\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.397643 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5cwjl"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.399113 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5cwjl" event={"ID":"6b19617f-1b10-4ff2-ad1f-f31d20663dcd","Type":"ContainerDied","Data":"77938106a56524b26d559b867ba8be278970dc84de105ae94ebed6a14834d01c"}
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.399156 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77938106a56524b26d559b867ba8be278970dc84de105ae94ebed6a14834d01c"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.443824 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 05 23:28:49 crc kubenswrapper[5034]: E0105 23:28:49.444445 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b19617f-1b10-4ff2-ad1f-f31d20663dcd" containerName="nova-cell1-conductor-db-sync"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.444466 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b19617f-1b10-4ff2-ad1f-f31d20663dcd" containerName="nova-cell1-conductor-db-sync"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.444723 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b19617f-1b10-4ff2-ad1f-f31d20663dcd" containerName="nova-cell1-conductor-db-sync"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.446055 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.449144 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.454009 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.480990 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvwqh\" (UniqueName: \"kubernetes.io/projected/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-kube-api-access-fvwqh\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.481195 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.482194 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.584132 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.584293 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.584357 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvwqh\" (UniqueName: \"kubernetes.io/projected/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-kube-api-access-fvwqh\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.596788 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.597654 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.620290 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvwqh\" (UniqueName: \"kubernetes.io/projected/1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff-kube-api-access-fvwqh\") pod \"nova-cell1-conductor-0\" (UID: \"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.779493 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.782619 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.786244 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-config-data\") pod \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") "
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.786381 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-scripts\") pod \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") "
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.786419 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rb27\" (UniqueName: \"kubernetes.io/projected/9d5ebff2-2a88-4059-b5dc-15a654cf534f-kube-api-access-2rb27\") pod \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") "
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.786470 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-combined-ca-bundle\") pod \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\" (UID: \"9d5ebff2-2a88-4059-b5dc-15a654cf534f\") "
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.793766 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-scripts" (OuterVolumeSpecName: "scripts") pod "9d5ebff2-2a88-4059-b5dc-15a654cf534f" (UID: "9d5ebff2-2a88-4059-b5dc-15a654cf534f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.797978 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5ebff2-2a88-4059-b5dc-15a654cf534f-kube-api-access-2rb27" (OuterVolumeSpecName: "kube-api-access-2rb27") pod "9d5ebff2-2a88-4059-b5dc-15a654cf534f" (UID: "9d5ebff2-2a88-4059-b5dc-15a654cf534f"). InnerVolumeSpecName "kube-api-access-2rb27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.819320 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-config-data" (OuterVolumeSpecName: "config-data") pod "9d5ebff2-2a88-4059-b5dc-15a654cf534f" (UID: "9d5ebff2-2a88-4059-b5dc-15a654cf534f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.840304 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d5ebff2-2a88-4059-b5dc-15a654cf534f" (UID: "9d5ebff2-2a88-4059-b5dc-15a654cf534f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.889907 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.889953 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rb27\" (UniqueName: \"kubernetes.io/projected/9d5ebff2-2a88-4059-b5dc-15a654cf534f-kube-api-access-2rb27\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.889967 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:49 crc kubenswrapper[5034]: I0105 23:28:49.889981 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5ebff2-2a88-4059-b5dc-15a654cf534f-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.266790 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 05 23:28:50 crc kubenswrapper[5034]: W0105 23:28:50.268330 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8eec6a_8f7d_4ab9_b092_49a1709ba4ff.slice/crio-eea9e776a5c66935de1640e48ae9b6c4ac205ef92dae3f2d1e435353940a9242 WatchSource:0}: Error finding container eea9e776a5c66935de1640e48ae9b6c4ac205ef92dae3f2d1e435353940a9242: Status 404 returned error can't find the container with id eea9e776a5c66935de1640e48ae9b6c4ac205ef92dae3f2d1e435353940a9242
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.409520 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m46xl" event={"ID":"9d5ebff2-2a88-4059-b5dc-15a654cf534f","Type":"ContainerDied","Data":"faf3c7f19b184d1fcbc7b0b6cb29d8eea3e49e9e2c5a4b71205614d818b4a308"}
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.409568 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faf3c7f19b184d1fcbc7b0b6cb29d8eea3e49e9e2c5a4b71205614d818b4a308"
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.409637 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m46xl"
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.412440 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff","Type":"ContainerStarted","Data":"eea9e776a5c66935de1640e48ae9b6c4ac205ef92dae3f2d1e435353940a9242"}
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.634023 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.634412 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerName="nova-api-log" containerID="cri-o://8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95" gracePeriod=30
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.634487 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerName="nova-api-api" containerID="cri-o://194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448" gracePeriod=30
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.652294 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.652559 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="563f42ac-73b9-48c3-819d-c94c992570d1" containerName="nova-scheduler-scheduler" containerID="cri-o://0c8261df2bd5144b8f3a92f066a86448f59dfb05e0af0e9b2378796a129fffd2" gracePeriod=30
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.737092 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.737501 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="58869281-7ed3-419b-827e-1d2a512235e2" containerName="nova-metadata-metadata" containerID="cri-o://3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a" gracePeriod=30
Jan 05 23:28:50 crc kubenswrapper[5034]: I0105 23:28:50.737779 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="58869281-7ed3-419b-827e-1d2a512235e2" containerName="nova-metadata-log" containerID="cri-o://d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6" gracePeriod=30
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.165421 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.321362 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-combined-ca-bundle\") pod \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.321477 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d145cb6-0c7c-4277-a234-90ce8cedff5b-logs\") pod \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.321509 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-config-data\") pod \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.321747 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h77qg\" (UniqueName: \"kubernetes.io/projected/9d145cb6-0c7c-4277-a234-90ce8cedff5b-kube-api-access-h77qg\") pod \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\" (UID: \"9d145cb6-0c7c-4277-a234-90ce8cedff5b\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.322648 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d145cb6-0c7c-4277-a234-90ce8cedff5b-logs" (OuterVolumeSpecName: "logs") pod "9d145cb6-0c7c-4277-a234-90ce8cedff5b" (UID: "9d145cb6-0c7c-4277-a234-90ce8cedff5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.323146 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d145cb6-0c7c-4277-a234-90ce8cedff5b-logs\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.328059 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d145cb6-0c7c-4277-a234-90ce8cedff5b-kube-api-access-h77qg" (OuterVolumeSpecName: "kube-api-access-h77qg") pod "9d145cb6-0c7c-4277-a234-90ce8cedff5b" (UID: "9d145cb6-0c7c-4277-a234-90ce8cedff5b"). InnerVolumeSpecName "kube-api-access-h77qg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.352913 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d145cb6-0c7c-4277-a234-90ce8cedff5b" (UID: "9d145cb6-0c7c-4277-a234-90ce8cedff5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.353528 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-config-data" (OuterVolumeSpecName: "config-data") pod "9d145cb6-0c7c-4277-a234-90ce8cedff5b" (UID: "9d145cb6-0c7c-4277-a234-90ce8cedff5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.371393 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.424486 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.424516 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h77qg\" (UniqueName: \"kubernetes.io/projected/9d145cb6-0c7c-4277-a234-90ce8cedff5b-kube-api-access-h77qg\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.424526 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d145cb6-0c7c-4277-a234-90ce8cedff5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.428908 5034 generic.go:334] "Generic (PLEG): container finished" podID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerID="194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448" exitCode=0
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.428944 5034 generic.go:334] "Generic (PLEG): container finished" podID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerID="8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95" exitCode=143
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.428995 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d145cb6-0c7c-4277-a234-90ce8cedff5b","Type":"ContainerDied","Data":"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448"}
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.429013 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.429057 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d145cb6-0c7c-4277-a234-90ce8cedff5b","Type":"ContainerDied","Data":"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95"}
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.429071 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d145cb6-0c7c-4277-a234-90ce8cedff5b","Type":"ContainerDied","Data":"8ac379d5d0bade03d0330ace3dbdb3f467c241b4e99d5588a23731977dfb916f"}
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.429103 5034 scope.go:117] "RemoveContainer" containerID="194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.433273 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff","Type":"ContainerStarted","Data":"0c99ce22bf0b0e72827a2492cb36ca03ea1f7ef633d45b08ec21bb5d1cab0c8c"}
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.433797 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.438585 5034 generic.go:334] "Generic (PLEG): container finished" podID="58869281-7ed3-419b-827e-1d2a512235e2" containerID="3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a" exitCode=0
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.438610 5034 generic.go:334] "Generic (PLEG): container finished" podID="58869281-7ed3-419b-827e-1d2a512235e2" containerID="d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6" exitCode=143
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.438632 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58869281-7ed3-419b-827e-1d2a512235e2","Type":"ContainerDied","Data":"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a"}
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.438654 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58869281-7ed3-419b-827e-1d2a512235e2","Type":"ContainerDied","Data":"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6"}
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.438663 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"58869281-7ed3-419b-827e-1d2a512235e2","Type":"ContainerDied","Data":"e414d32f6e82b7c8c658b101ca8321a761d3204a4591507d8f8bb7e002b57886"}
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.438721 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.452149 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.452125442 podStartE2EDuration="2.452125442s" podCreationTimestamp="2026-01-05 23:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:51.449849918 +0000 UTC m=+5823.821849357" watchObservedRunningTime="2026-01-05 23:28:51.452125442 +0000 UTC m=+5823.824124891"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.469666 5034 scope.go:117] "RemoveContainer" containerID="8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.496826 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.498638 5034 scope.go:117] "RemoveContainer" containerID="194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448"
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.499325 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448\": container with ID starting with 194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448 not found: ID does not exist" containerID="194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.499393 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448"} err="failed to get container status \"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448\": rpc error: code = NotFound desc = could not find container \"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448\": container with ID starting with 194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448 not found: ID does not exist"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.499445 5034 scope.go:117] "RemoveContainer" containerID="8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95"
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.499843 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95\": container with ID starting with 8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95 not found: ID does not exist" containerID="8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.499864 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95"} err="failed to get container status \"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95\": rpc error: code = NotFound desc = could not find container \"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95\": container with ID starting with 8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95 not found: ID does not exist"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.499878 5034 scope.go:117] "RemoveContainer" containerID="194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.500115 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448"} err="failed to get container status \"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448\": rpc error: code = NotFound desc = could not find container \"194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448\": container with ID starting with 194c3fe620762a02b350c97f3266e52d061e5d78f983666f101663cbe77c6448 not found: ID does not exist"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.500139 5034 scope.go:117] "RemoveContainer" containerID="8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.500376 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95"} err="failed to get container status \"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95\": rpc error: code = NotFound desc = could not find container \"8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95\": container with ID starting with 8b5442015659affbb355d81e3245f47071b9e400f73566826d2e13135a05db95 not found: ID does not exist"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.500402 5034 scope.go:117] "RemoveContainer" containerID="3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.507100 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.518534 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.519051 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5ebff2-2a88-4059-b5dc-15a654cf534f" containerName="nova-manage"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519076 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5ebff2-2a88-4059-b5dc-15a654cf534f" containerName="nova-manage"
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.519129 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58869281-7ed3-419b-827e-1d2a512235e2" containerName="nova-metadata-metadata"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519139 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58869281-7ed3-419b-827e-1d2a512235e2" containerName="nova-metadata-metadata"
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.519159 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerName="nova-api-log"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519169 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerName="nova-api-log"
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.519193 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58869281-7ed3-419b-827e-1d2a512235e2" containerName="nova-metadata-log"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519202 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="58869281-7ed3-419b-827e-1d2a512235e2" containerName="nova-metadata-log"
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.519214 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerName="nova-api-api"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519220 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerName="nova-api-api"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519397 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerName="nova-api-log"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519417 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58869281-7ed3-419b-827e-1d2a512235e2" containerName="nova-metadata-metadata"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519426 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" containerName="nova-api-api"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519439 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="58869281-7ed3-419b-827e-1d2a512235e2" containerName="nova-metadata-log"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.519447 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5ebff2-2a88-4059-b5dc-15a654cf534f" containerName="nova-manage"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.520485 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.525250 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58869281-7ed3-419b-827e-1d2a512235e2-logs\") pod \"58869281-7ed3-419b-827e-1d2a512235e2\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.525299 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7477\" (UniqueName: \"kubernetes.io/projected/58869281-7ed3-419b-827e-1d2a512235e2-kube-api-access-m7477\") pod \"58869281-7ed3-419b-827e-1d2a512235e2\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.525510 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-config-data\") pod \"58869281-7ed3-419b-827e-1d2a512235e2\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.525553 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-nova-metadata-tls-certs\") pod \"58869281-7ed3-419b-827e-1d2a512235e2\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.525604 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-combined-ca-bundle\") pod \"58869281-7ed3-419b-827e-1d2a512235e2\" (UID: \"58869281-7ed3-419b-827e-1d2a512235e2\") "
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.526822 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.529100 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58869281-7ed3-419b-827e-1d2a512235e2-logs" (OuterVolumeSpecName: "logs") pod "58869281-7ed3-419b-827e-1d2a512235e2" (UID: "58869281-7ed3-419b-827e-1d2a512235e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.532540 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58869281-7ed3-419b-827e-1d2a512235e2-kube-api-access-m7477" (OuterVolumeSpecName: "kube-api-access-m7477") pod "58869281-7ed3-419b-827e-1d2a512235e2" (UID: "58869281-7ed3-419b-827e-1d2a512235e2"). InnerVolumeSpecName "kube-api-access-m7477". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.532644 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.534549 5034 scope.go:117] "RemoveContainer" containerID="d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.568988 5034 scope.go:117] "RemoveContainer" containerID="3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.570649 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58869281-7ed3-419b-827e-1d2a512235e2" (UID: "58869281-7ed3-419b-827e-1d2a512235e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.570950 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a\": container with ID starting with 3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a not found: ID does not exist" containerID="3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.570995 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a"} err="failed to get container status \"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a\": rpc error: code = NotFound desc = could not find container \"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a\": container with ID starting with 3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a not found: ID does not exist"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.571019 5034 scope.go:117] "RemoveContainer" containerID="d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.575226 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-config-data" (OuterVolumeSpecName: "config-data") pod "58869281-7ed3-419b-827e-1d2a512235e2" (UID: "58869281-7ed3-419b-827e-1d2a512235e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:28:51 crc kubenswrapper[5034]: E0105 23:28:51.575252 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6\": container with ID starting with d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6 not found: ID does not exist" containerID="d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.575276 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6"} err="failed to get container status \"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6\": rpc error: code = NotFound desc = could not find container \"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6\": container with ID starting with d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6 not found: ID does not exist"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.575292 5034 scope.go:117] "RemoveContainer" containerID="3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.576227 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a"} err="failed to get container status \"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a\": rpc error: code = NotFound desc = could not find container \"3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a\": container with ID starting with 3f810062d1eddbd55e16aad977b82089101d42194d80ebb812afe14ec618667a not found: ID does not exist"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.576282 5034 scope.go:117] "RemoveContainer" containerID="d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.576571 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6"} err="failed to get container status \"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6\": rpc error: code = NotFound desc = could not find container \"d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6\": container with ID starting with d23833f6e7236d544ea8f0e2b8ec5597ddbb5690c2b7f936d51113f5ec32e4d6 not found: ID does not exist"
Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.586841 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "58869281-7ed3-419b-827e-1d2a512235e2" (UID: "58869281-7ed3-419b-827e-1d2a512235e2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.628649 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-config-data\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.629537 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbsh4\" (UniqueName: \"kubernetes.io/projected/8f16a9d1-a973-4a85-b31a-86f7432d41d4-kube-api-access-pbsh4\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.629575 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f16a9d1-a973-4a85-b31a-86f7432d41d4-logs\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.629615 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.630695 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.630771 5034 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.630788 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58869281-7ed3-419b-827e-1d2a512235e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.630801 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58869281-7ed3-419b-827e-1d2a512235e2-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.630812 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7477\" (UniqueName: \"kubernetes.io/projected/58869281-7ed3-419b-827e-1d2a512235e2-kube-api-access-m7477\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.732905 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-config-data\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.733058 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbsh4\" (UniqueName: \"kubernetes.io/projected/8f16a9d1-a973-4a85-b31a-86f7432d41d4-kube-api-access-pbsh4\") pod \"nova-api-0\" (UID: 
\"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.733109 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f16a9d1-a973-4a85-b31a-86f7432d41d4-logs\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.733140 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.733667 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f16a9d1-a973-4a85-b31a-86f7432d41d4-logs\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.736952 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.738296 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-config-data\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.752670 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbsh4\" (UniqueName: \"kubernetes.io/projected/8f16a9d1-a973-4a85-b31a-86f7432d41d4-kube-api-access-pbsh4\") pod \"nova-api-0\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.796245 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.815492 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.860688 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.875470 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d145cb6-0c7c-4277-a234-90ce8cedff5b" path="/var/lib/kubelet/pods/9d145cb6-0c7c-4277-a234-90ce8cedff5b/volumes" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.891297 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d77b7579-w6pzc"] Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.891664 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" podUID="95c503a0-b4db-4e28-913a-830f750ebe0a" containerName="dnsmasq-dns" containerID="cri-o://b9043fb2a3bbf5e57f5cc037fc18426de8acda3ae4e704dee45e76ef10bffa78" gracePeriod=10 Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.900374 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.932778 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.962098 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.965211 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.970329 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.970444 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 23:28:51 crc kubenswrapper[5034]: I0105 23:28:51.982768 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.143526 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-config-data\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.143619 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.143701 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72500551-77bd-417e-b221-1f1b47b84373-logs\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.143751 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7h57\" (UniqueName: \"kubernetes.io/projected/72500551-77bd-417e-b221-1f1b47b84373-kube-api-access-m7h57\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.143789 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.245452 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-config-data\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.245548 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.245599 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72500551-77bd-417e-b221-1f1b47b84373-logs\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.245637 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7h57\" (UniqueName: \"kubernetes.io/projected/72500551-77bd-417e-b221-1f1b47b84373-kube-api-access-m7h57\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.245670 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.246285 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72500551-77bd-417e-b221-1f1b47b84373-logs\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.252923 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.252933 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.256508 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-config-data\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc 
kubenswrapper[5034]: I0105 23:28:52.273135 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7h57\" (UniqueName: \"kubernetes.io/projected/72500551-77bd-417e-b221-1f1b47b84373-kube-api-access-m7h57\") pod \"nova-metadata-0\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.353787 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.435177 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 23:28:52 crc kubenswrapper[5034]: W0105 23:28:52.459421 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f16a9d1_a973_4a85_b31a_86f7432d41d4.slice/crio-785af509d812540b3129a554be6e8e7b9c93a6de23d05ed39e3ded7faa2a383d WatchSource:0}: Error finding container 785af509d812540b3129a554be6e8e7b9c93a6de23d05ed39e3ded7faa2a383d: Status 404 returned error can't find the container with id 785af509d812540b3129a554be6e8e7b9c93a6de23d05ed39e3ded7faa2a383d Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.499131 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.502179 5034 generic.go:334] "Generic (PLEG): container finished" podID="95c503a0-b4db-4e28-913a-830f750ebe0a" containerID="b9043fb2a3bbf5e57f5cc037fc18426de8acda3ae4e704dee45e76ef10bffa78" exitCode=0 Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.502879 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" event={"ID":"95c503a0-b4db-4e28-913a-830f750ebe0a","Type":"ContainerDied","Data":"b9043fb2a3bbf5e57f5cc037fc18426de8acda3ae4e704dee45e76ef10bffa78"} Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.502961 5034 scope.go:117] "RemoveContainer" containerID="b9043fb2a3bbf5e57f5cc037fc18426de8acda3ae4e704dee45e76ef10bffa78" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.544813 5034 scope.go:117] "RemoveContainer" containerID="b99f597aad7040621f55fbadf649a86f218accafcfb6e38c31ee5c30ceb2bd31" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.660758 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-sb\") pod \"95c503a0-b4db-4e28-913a-830f750ebe0a\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.660835 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-nb\") pod \"95c503a0-b4db-4e28-913a-830f750ebe0a\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.660875 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-dns-svc\") pod \"95c503a0-b4db-4e28-913a-830f750ebe0a\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.660994 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-887bn\" 
(UniqueName: \"kubernetes.io/projected/95c503a0-b4db-4e28-913a-830f750ebe0a-kube-api-access-887bn\") pod \"95c503a0-b4db-4e28-913a-830f750ebe0a\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.661025 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-config\") pod \"95c503a0-b4db-4e28-913a-830f750ebe0a\" (UID: \"95c503a0-b4db-4e28-913a-830f750ebe0a\") " Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.670782 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c503a0-b4db-4e28-913a-830f750ebe0a-kube-api-access-887bn" (OuterVolumeSpecName: "kube-api-access-887bn") pod "95c503a0-b4db-4e28-913a-830f750ebe0a" (UID: "95c503a0-b4db-4e28-913a-830f750ebe0a"). InnerVolumeSpecName "kube-api-access-887bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.720658 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95c503a0-b4db-4e28-913a-830f750ebe0a" (UID: "95c503a0-b4db-4e28-913a-830f750ebe0a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.721252 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95c503a0-b4db-4e28-913a-830f750ebe0a" (UID: "95c503a0-b4db-4e28-913a-830f750ebe0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.728830 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95c503a0-b4db-4e28-913a-830f750ebe0a" (UID: "95c503a0-b4db-4e28-913a-830f750ebe0a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.733433 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-config" (OuterVolumeSpecName: "config") pod "95c503a0-b4db-4e28-913a-830f750ebe0a" (UID: "95c503a0-b4db-4e28-913a-830f750ebe0a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.764253 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-887bn\" (UniqueName: \"kubernetes.io/projected/95c503a0-b4db-4e28-913a-830f750ebe0a-kube-api-access-887bn\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.764295 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.764307 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.764331 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.764339 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c503a0-b4db-4e28-913a-830f750ebe0a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:28:52 crc kubenswrapper[5034]: I0105 23:28:52.870274 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:28:52 crc kubenswrapper[5034]: W0105 23:28:52.882118 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72500551_77bd_417e_b221_1f1b47b84373.slice/crio-822703aac7fb7e9c9e6d081350fbf855b6fa83d60384164b4d1abfbf55542a4b WatchSource:0}: Error finding container 822703aac7fb7e9c9e6d081350fbf855b6fa83d60384164b4d1abfbf55542a4b: Status 404 returned error can't find the container with id 822703aac7fb7e9c9e6d081350fbf855b6fa83d60384164b4d1abfbf55542a4b Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.512937 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f16a9d1-a973-4a85-b31a-86f7432d41d4","Type":"ContainerStarted","Data":"c132f56f134a1a3e60bc277bd2a3ff68530ec887930cb1da18f1ae44a2e99706"} Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.512991 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f16a9d1-a973-4a85-b31a-86f7432d41d4","Type":"ContainerStarted","Data":"8dd3ed86e94ce5a241b8e6895ee26f85094125cfb1c85b8acdb6351a72f8bc1c"} Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.513003 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f16a9d1-a973-4a85-b31a-86f7432d41d4","Type":"ContainerStarted","Data":"785af509d812540b3129a554be6e8e7b9c93a6de23d05ed39e3ded7faa2a383d"} Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.514994 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72500551-77bd-417e-b221-1f1b47b84373","Type":"ContainerStarted","Data":"e43198f3e0afb2c482419addfa9ffe06c69e6dc57acb538cea0a31d10830d314"} Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.515056 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"72500551-77bd-417e-b221-1f1b47b84373","Type":"ContainerStarted","Data":"9d8ad031803098c4ea7fc6879c9f9b50adc7c006ad8a355e7661d71ee4741182"} Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.515071 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72500551-77bd-417e-b221-1f1b47b84373","Type":"ContainerStarted","Data":"822703aac7fb7e9c9e6d081350fbf855b6fa83d60384164b4d1abfbf55542a4b"} Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.516452 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" event={"ID":"95c503a0-b4db-4e28-913a-830f750ebe0a","Type":"ContainerDied","Data":"1617386dc2a14c41332c144fd71a408df0e6cab58961c2b4f2586a934b08dc73"} Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.516478 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d77b7579-w6pzc" Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.529277 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5292607 podStartE2EDuration="2.5292607s" podCreationTimestamp="2026-01-05 23:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:53.528462528 +0000 UTC m=+5825.900461967" watchObservedRunningTime="2026-01-05 23:28:53.5292607 +0000 UTC m=+5825.901260139" Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.570107 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.570051538 podStartE2EDuration="2.570051538s" podCreationTimestamp="2026-01-05 23:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:28:53.554991511 +0000 UTC m=+5825.926990950" watchObservedRunningTime="2026-01-05 23:28:53.570051538 +0000 UTC m=+5825.942050977" Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.580225 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d77b7579-w6pzc"] Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.590927 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d77b7579-w6pzc"] Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.850101 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58869281-7ed3-419b-827e-1d2a512235e2" path="/var/lib/kubelet/pods/58869281-7ed3-419b-827e-1d2a512235e2/volumes" Jan 05 23:28:53 crc kubenswrapper[5034]: I0105 23:28:53.850730 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c503a0-b4db-4e28-913a-830f750ebe0a" path="/var/lib/kubelet/pods/95c503a0-b4db-4e28-913a-830f750ebe0a/volumes" Jan 05 23:28:56 crc kubenswrapper[5034]: I0105 23:28:56.815234 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:56 crc kubenswrapper[5034]: I0105 23:28:56.847605 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:57 crc kubenswrapper[5034]: I0105 23:28:57.353969 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 23:28:57 crc kubenswrapper[5034]: I0105 23:28:57.354038 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Jan 05 23:28:57 crc kubenswrapper[5034]: I0105 23:28:57.568346 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 05 23:28:59 crc kubenswrapper[5034]: I0105 23:28:59.806986 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 05 23:28:59 crc kubenswrapper[5034]: I0105 23:28:59.838808 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:28:59 crc kubenswrapper[5034]: E0105 23:28:59.839197 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.241347 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qxmd7"] Jan 05 23:29:00 crc kubenswrapper[5034]: E0105 23:29:00.241819 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c503a0-b4db-4e28-913a-830f750ebe0a" containerName="dnsmasq-dns" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.241838 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c503a0-b4db-4e28-913a-830f750ebe0a" containerName="dnsmasq-dns" Jan 05 23:29:00 crc kubenswrapper[5034]: E0105 23:29:00.241860 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c503a0-b4db-4e28-913a-830f750ebe0a" containerName="init" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.241867 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c503a0-b4db-4e28-913a-830f750ebe0a" containerName="init" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.242063 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c503a0-b4db-4e28-913a-830f750ebe0a" containerName="dnsmasq-dns" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.242858 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.244919 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.245217 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.251913 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qxmd7"] Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.417290 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-config-data\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.417420 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7pg\" (UniqueName: \"kubernetes.io/projected/bcf16341-2b6a-4d34-b363-09567d1148e9-kube-api-access-rd7pg\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.419475 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.419598 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-scripts\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.523479 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-config-data\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.523539 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd7pg\" (UniqueName: \"kubernetes.io/projected/bcf16341-2b6a-4d34-b363-09567d1148e9-kube-api-access-rd7pg\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.523567 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.523601 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-scripts\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.531051 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.531167 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-scripts\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.531229 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-config-data\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.541888 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd7pg\" (UniqueName: \"kubernetes.io/projected/bcf16341-2b6a-4d34-b363-09567d1148e9-kube-api-access-rd7pg\") pod \"nova-cell1-cell-mapping-qxmd7\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:00 crc kubenswrapper[5034]: I0105 23:29:00.571299 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:01 crc kubenswrapper[5034]: I0105 23:29:01.023927 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qxmd7"] Jan 05 23:29:01 crc kubenswrapper[5034]: I0105 23:29:01.591635 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qxmd7" event={"ID":"bcf16341-2b6a-4d34-b363-09567d1148e9","Type":"ContainerStarted","Data":"8bf44c442112e405e57e853421b439210a2cb75050cc6fbfd420110c5736446e"} Jan 05 23:29:01 crc kubenswrapper[5034]: I0105 23:29:01.591969 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qxmd7" event={"ID":"bcf16341-2b6a-4d34-b363-09567d1148e9","Type":"ContainerStarted","Data":"725b68fb3052d308e36f7f9de39fdc9cc58b9f727368ff3d88a7cee65edf1f80"} Jan 05 23:29:01 crc kubenswrapper[5034]: I0105 23:29:01.608379 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qxmd7" podStartSLOduration=1.608359191 podStartE2EDuration="1.608359191s" podCreationTimestamp="2026-01-05 23:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:29:01.605360696 +0000 UTC m=+5833.977360135" watchObservedRunningTime="2026-01-05 23:29:01.608359191 +0000 UTC m=+5833.980358630" Jan 05 23:29:01 crc kubenswrapper[5034]: I0105 23:29:01.862089 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 23:29:01 crc kubenswrapper[5034]: I0105 23:29:01.862150 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 23:29:02 crc kubenswrapper[5034]: I0105 23:29:02.354735 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 23:29:02 crc kubenswrapper[5034]: I0105 23:29:02.354780 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 23:29:02 crc kubenswrapper[5034]: I0105 23:29:02.944301 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 23:29:02 crc kubenswrapper[5034]: I0105 23:29:02.944329 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 23:29:03 crc kubenswrapper[5034]: I0105 23:29:03.370325 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.87:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 23:29:03 crc kubenswrapper[5034]: I0105 23:29:03.370326 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.87:8775/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Jan 05 23:29:06 crc kubenswrapper[5034]: I0105 23:29:06.647713 5034 generic.go:334] "Generic (PLEG): container finished" podID="bcf16341-2b6a-4d34-b363-09567d1148e9" containerID="8bf44c442112e405e57e853421b439210a2cb75050cc6fbfd420110c5736446e" exitCode=0 Jan 05 23:29:06 crc kubenswrapper[5034]: I0105 23:29:06.647807 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qxmd7" event={"ID":"bcf16341-2b6a-4d34-b363-09567d1148e9","Type":"ContainerDied","Data":"8bf44c442112e405e57e853421b439210a2cb75050cc6fbfd420110c5736446e"} Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.044446 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.200809 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-combined-ca-bundle\") pod \"bcf16341-2b6a-4d34-b363-09567d1148e9\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.201151 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-config-data\") pod \"bcf16341-2b6a-4d34-b363-09567d1148e9\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.201189 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd7pg\" (UniqueName: \"kubernetes.io/projected/bcf16341-2b6a-4d34-b363-09567d1148e9-kube-api-access-rd7pg\") pod \"bcf16341-2b6a-4d34-b363-09567d1148e9\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.201273 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-scripts\") pod \"bcf16341-2b6a-4d34-b363-09567d1148e9\" (UID: \"bcf16341-2b6a-4d34-b363-09567d1148e9\") " Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.210478 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf16341-2b6a-4d34-b363-09567d1148e9-kube-api-access-rd7pg" (OuterVolumeSpecName: "kube-api-access-rd7pg") pod "bcf16341-2b6a-4d34-b363-09567d1148e9" (UID: "bcf16341-2b6a-4d34-b363-09567d1148e9"). InnerVolumeSpecName "kube-api-access-rd7pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.212134 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-scripts" (OuterVolumeSpecName: "scripts") pod "bcf16341-2b6a-4d34-b363-09567d1148e9" (UID: "bcf16341-2b6a-4d34-b363-09567d1148e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.233010 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcf16341-2b6a-4d34-b363-09567d1148e9" (UID: "bcf16341-2b6a-4d34-b363-09567d1148e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.235946 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-config-data" (OuterVolumeSpecName: "config-data") pod "bcf16341-2b6a-4d34-b363-09567d1148e9" (UID: "bcf16341-2b6a-4d34-b363-09567d1148e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.303513 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.303558 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd7pg\" (UniqueName: \"kubernetes.io/projected/bcf16341-2b6a-4d34-b363-09567d1148e9-kube-api-access-rd7pg\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.303569 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.303581 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf16341-2b6a-4d34-b363-09567d1148e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.671149 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qxmd7" event={"ID":"bcf16341-2b6a-4d34-b363-09567d1148e9","Type":"ContainerDied","Data":"725b68fb3052d308e36f7f9de39fdc9cc58b9f727368ff3d88a7cee65edf1f80"} Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.671612 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="725b68fb3052d308e36f7f9de39fdc9cc58b9f727368ff3d88a7cee65edf1f80" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.671284 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qxmd7" Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.857783 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.858072 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-log" containerID="cri-o://8dd3ed86e94ce5a241b8e6895ee26f85094125cfb1c85b8acdb6351a72f8bc1c" gracePeriod=30 Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.858260 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-api" containerID="cri-o://c132f56f134a1a3e60bc277bd2a3ff68530ec887930cb1da18f1ae44a2e99706" gracePeriod=30 Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.880161 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.880465 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-log" containerID="cri-o://9d8ad031803098c4ea7fc6879c9f9b50adc7c006ad8a355e7661d71ee4741182" gracePeriod=30 Jan 05 23:29:08 crc kubenswrapper[5034]: I0105 23:29:08.880581 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-metadata" containerID="cri-o://e43198f3e0afb2c482419addfa9ffe06c69e6dc57acb538cea0a31d10830d314" gracePeriod=30 Jan 05 23:29:09 crc kubenswrapper[5034]: I0105 23:29:09.681903 5034 generic.go:334] "Generic (PLEG): container finished" podID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerID="8dd3ed86e94ce5a241b8e6895ee26f85094125cfb1c85b8acdb6351a72f8bc1c" exitCode=143 Jan 05 23:29:09 crc kubenswrapper[5034]: I0105 23:29:09.681995 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f16a9d1-a973-4a85-b31a-86f7432d41d4","Type":"ContainerDied","Data":"8dd3ed86e94ce5a241b8e6895ee26f85094125cfb1c85b8acdb6351a72f8bc1c"} Jan 05 23:29:09 crc kubenswrapper[5034]: I0105 23:29:09.684183 5034 generic.go:334] "Generic (PLEG): container finished" podID="72500551-77bd-417e-b221-1f1b47b84373" containerID="9d8ad031803098c4ea7fc6879c9f9b50adc7c006ad8a355e7661d71ee4741182" exitCode=143 Jan 05 23:29:09 crc kubenswrapper[5034]: I0105 23:29:09.684232 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72500551-77bd-417e-b221-1f1b47b84373","Type":"ContainerDied","Data":"9d8ad031803098c4ea7fc6879c9f9b50adc7c006ad8a355e7661d71ee4741182"} Jan 05 23:29:14 crc kubenswrapper[5034]: I0105 23:29:14.841059 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:29:14 crc kubenswrapper[5034]: E0105 23:29:14.842105 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 
23:29:20 crc kubenswrapper[5034]: I0105 23:29:20.809430 5034 generic.go:334] "Generic (PLEG): container finished" podID="563f42ac-73b9-48c3-819d-c94c992570d1" containerID="0c8261df2bd5144b8f3a92f066a86448f59dfb05e0af0e9b2378796a129fffd2" exitCode=137 Jan 05 23:29:20 crc kubenswrapper[5034]: I0105 23:29:20.810107 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"563f42ac-73b9-48c3-819d-c94c992570d1","Type":"ContainerDied","Data":"0c8261df2bd5144b8f3a92f066a86448f59dfb05e0af0e9b2378796a129fffd2"} Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.039287 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.179444 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-combined-ca-bundle\") pod \"563f42ac-73b9-48c3-819d-c94c992570d1\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.179540 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-config-data\") pod \"563f42ac-73b9-48c3-819d-c94c992570d1\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.179791 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvg9l\" (UniqueName: \"kubernetes.io/projected/563f42ac-73b9-48c3-819d-c94c992570d1-kube-api-access-jvg9l\") pod \"563f42ac-73b9-48c3-819d-c94c992570d1\" (UID: \"563f42ac-73b9-48c3-819d-c94c992570d1\") " Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.186300 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563f42ac-73b9-48c3-819d-c94c992570d1-kube-api-access-jvg9l" (OuterVolumeSpecName: "kube-api-access-jvg9l") pod "563f42ac-73b9-48c3-819d-c94c992570d1" (UID: "563f42ac-73b9-48c3-819d-c94c992570d1"). InnerVolumeSpecName "kube-api-access-jvg9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.212766 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "563f42ac-73b9-48c3-819d-c94c992570d1" (UID: "563f42ac-73b9-48c3-819d-c94c992570d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.228929 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-config-data" (OuterVolumeSpecName: "config-data") pod "563f42ac-73b9-48c3-819d-c94c992570d1" (UID: "563f42ac-73b9-48c3-819d-c94c992570d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.282680 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvg9l\" (UniqueName: \"kubernetes.io/projected/563f42ac-73b9-48c3-819d-c94c992570d1-kube-api-access-jvg9l\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.282726 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.282736 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563f42ac-73b9-48c3-819d-c94c992570d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.822275 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"563f42ac-73b9-48c3-819d-c94c992570d1","Type":"ContainerDied","Data":"d7c20453a04897bc6e82c6d0db425ff1a8fc8a1f7cb44b51faf2d4548bda75c5"} Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.822323 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.822346 5034 scope.go:117] "RemoveContainer" containerID="0c8261df2bd5144b8f3a92f066a86448f59dfb05e0af0e9b2378796a129fffd2" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.862225 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.862281 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.863188 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.874094 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.895672 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 23:29:21 crc kubenswrapper[5034]: E0105 23:29:21.896447 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563f42ac-73b9-48c3-819d-c94c992570d1" containerName="nova-scheduler-scheduler" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.896534 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="563f42ac-73b9-48c3-819d-c94c992570d1" containerName="nova-scheduler-scheduler" Jan 05 23:29:21 crc kubenswrapper[5034]: E0105 23:29:21.896629 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf16341-2b6a-4d34-b363-09567d1148e9" containerName="nova-manage" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.896690 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf16341-2b6a-4d34-b363-09567d1148e9" containerName="nova-manage" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.897001 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf16341-2b6a-4d34-b363-09567d1148e9" containerName="nova-manage" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.897097 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="563f42ac-73b9-48c3-819d-c94c992570d1" containerName="nova-scheduler-scheduler" Jan 05 23:29:21 crc kubenswrapper[5034]: 
I0105 23:29:21.898055 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.910836 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.914406 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.997224 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc18f0e-03e7-493a-a861-454e4b8140c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.997330 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74xvp\" (UniqueName: \"kubernetes.io/projected/9bc18f0e-03e7-493a-a861-454e4b8140c5-kube-api-access-74xvp\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:21 crc kubenswrapper[5034]: I0105 23:29:21.997370 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc18f0e-03e7-493a-a861-454e4b8140c5-config-data\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.099775 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc18f0e-03e7-493a-a861-454e4b8140c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.099881 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74xvp\" (UniqueName: \"kubernetes.io/projected/9bc18f0e-03e7-493a-a861-454e4b8140c5-kube-api-access-74xvp\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.100258 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc18f0e-03e7-493a-a861-454e4b8140c5-config-data\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.105970 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc18f0e-03e7-493a-a861-454e4b8140c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.107812 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc18f0e-03e7-493a-a861-454e4b8140c5-config-data\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.119278 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-74xvp\" (UniqueName: \"kubernetes.io/projected/9bc18f0e-03e7-493a-a861-454e4b8140c5-kube-api-access-74xvp\") pod \"nova-scheduler-0\" (UID: \"9bc18f0e-03e7-493a-a861-454e4b8140c5\") " pod="openstack/nova-scheduler-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.225521 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.727191 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 23:29:22 crc kubenswrapper[5034]: W0105 23:29:22.730332 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bc18f0e_03e7_493a_a861_454e4b8140c5.slice/crio-7ccc6f1c55397f1e9255c87304762f271d219af84ef2dbf75b45c8a1e0f0e277 WatchSource:0}: Error finding container 7ccc6f1c55397f1e9255c87304762f271d219af84ef2dbf75b45c8a1e0f0e277: Status 404 returned error can't find the container with id 7ccc6f1c55397f1e9255c87304762f271d219af84ef2dbf75b45c8a1e0f0e277 Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.836632 5034 generic.go:334] "Generic (PLEG): container finished" podID="72500551-77bd-417e-b221-1f1b47b84373" containerID="e43198f3e0afb2c482419addfa9ffe06c69e6dc57acb538cea0a31d10830d314" exitCode=0 Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.836737 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72500551-77bd-417e-b221-1f1b47b84373","Type":"ContainerDied","Data":"e43198f3e0afb2c482419addfa9ffe06c69e6dc57acb538cea0a31d10830d314"} Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.836815 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72500551-77bd-417e-b221-1f1b47b84373","Type":"ContainerDied","Data":"822703aac7fb7e9c9e6d081350fbf855b6fa83d60384164b4d1abfbf55542a4b"} Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.836834 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="822703aac7fb7e9c9e6d081350fbf855b6fa83d60384164b4d1abfbf55542a4b" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.838499 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bc18f0e-03e7-493a-a861-454e4b8140c5","Type":"ContainerStarted","Data":"7ccc6f1c55397f1e9255c87304762f271d219af84ef2dbf75b45c8a1e0f0e277"} Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.843909 5034 generic.go:334] "Generic (PLEG): container finished" podID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerID="c132f56f134a1a3e60bc277bd2a3ff68530ec887930cb1da18f1ae44a2e99706" exitCode=0 Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.844009 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f16a9d1-a973-4a85-b31a-86f7432d41d4","Type":"ContainerDied","Data":"c132f56f134a1a3e60bc277bd2a3ff68530ec887930cb1da18f1ae44a2e99706"} Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.844048 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f16a9d1-a973-4a85-b31a-86f7432d41d4","Type":"ContainerDied","Data":"785af509d812540b3129a554be6e8e7b9c93a6de23d05ed39e3ded7faa2a383d"} Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.844064 5034 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="785af509d812540b3129a554be6e8e7b9c93a6de23d05ed39e3ded7faa2a383d" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.905879 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:29:22 crc kubenswrapper[5034]: I0105 23:29:22.912604 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.028812 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-combined-ca-bundle\") pod \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.029259 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-config-data\") pod \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.030203 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f16a9d1-a973-4a85-b31a-86f7432d41d4-logs\") pod \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.032426 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7h57\" (UniqueName: \"kubernetes.io/projected/72500551-77bd-417e-b221-1f1b47b84373-kube-api-access-m7h57\") pod \"72500551-77bd-417e-b221-1f1b47b84373\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.032514 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-config-data\") pod \"72500551-77bd-417e-b221-1f1b47b84373\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.032597 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72500551-77bd-417e-b221-1f1b47b84373-logs\") pod \"72500551-77bd-417e-b221-1f1b47b84373\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.032655 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbsh4\" (UniqueName: \"kubernetes.io/projected/8f16a9d1-a973-4a85-b31a-86f7432d41d4-kube-api-access-pbsh4\") pod \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\" (UID: \"8f16a9d1-a973-4a85-b31a-86f7432d41d4\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.032723 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-nova-metadata-tls-certs\") pod \"72500551-77bd-417e-b221-1f1b47b84373\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.032776 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-combined-ca-bundle\") pod 
\"72500551-77bd-417e-b221-1f1b47b84373\" (UID: \"72500551-77bd-417e-b221-1f1b47b84373\") " Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.035814 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f16a9d1-a973-4a85-b31a-86f7432d41d4-logs" (OuterVolumeSpecName: "logs") pod "8f16a9d1-a973-4a85-b31a-86f7432d41d4" (UID: "8f16a9d1-a973-4a85-b31a-86f7432d41d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.037508 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72500551-77bd-417e-b221-1f1b47b84373-logs" (OuterVolumeSpecName: "logs") pod "72500551-77bd-417e-b221-1f1b47b84373" (UID: "72500551-77bd-417e-b221-1f1b47b84373"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.040403 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f16a9d1-a973-4a85-b31a-86f7432d41d4-kube-api-access-pbsh4" (OuterVolumeSpecName: "kube-api-access-pbsh4") pod "8f16a9d1-a973-4a85-b31a-86f7432d41d4" (UID: "8f16a9d1-a973-4a85-b31a-86f7432d41d4"). InnerVolumeSpecName "kube-api-access-pbsh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.067743 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72500551-77bd-417e-b221-1f1b47b84373-kube-api-access-m7h57" (OuterVolumeSpecName: "kube-api-access-m7h57") pod "72500551-77bd-417e-b221-1f1b47b84373" (UID: "72500551-77bd-417e-b221-1f1b47b84373"). InnerVolumeSpecName "kube-api-access-m7h57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.078375 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-config-data" (OuterVolumeSpecName: "config-data") pod "8f16a9d1-a973-4a85-b31a-86f7432d41d4" (UID: "8f16a9d1-a973-4a85-b31a-86f7432d41d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.078638 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72500551-77bd-417e-b221-1f1b47b84373" (UID: "72500551-77bd-417e-b221-1f1b47b84373"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.099742 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f16a9d1-a973-4a85-b31a-86f7432d41d4" (UID: "8f16a9d1-a973-4a85-b31a-86f7432d41d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.103187 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-config-data" (OuterVolumeSpecName: "config-data") pod "72500551-77bd-417e-b221-1f1b47b84373" (UID: "72500551-77bd-417e-b221-1f1b47b84373"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.135463 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.135508 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f16a9d1-a973-4a85-b31a-86f7432d41d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.135523 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f16a9d1-a973-4a85-b31a-86f7432d41d4-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.135533 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7h57\" (UniqueName: \"kubernetes.io/projected/72500551-77bd-417e-b221-1f1b47b84373-kube-api-access-m7h57\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.135548 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.135559 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72500551-77bd-417e-b221-1f1b47b84373-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.135571 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbsh4\" (UniqueName: \"kubernetes.io/projected/8f16a9d1-a973-4a85-b31a-86f7432d41d4-kube-api-access-pbsh4\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.135582 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.145230 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "72500551-77bd-417e-b221-1f1b47b84373" (UID: "72500551-77bd-417e-b221-1f1b47b84373"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.237546 5034 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72500551-77bd-417e-b221-1f1b47b84373-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.851740 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563f42ac-73b9-48c3-819d-c94c992570d1" path="/var/lib/kubelet/pods/563f42ac-73b9-48c3-819d-c94c992570d1/volumes" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.880276 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bc18f0e-03e7-493a-a861-454e4b8140c5","Type":"ContainerStarted","Data":"195a0797d0e67afd99530691626e9dfc5f45e1448cd4b3b0e61030b9a163a27d"} Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.880354 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.880353 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.916123 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.916088362 podStartE2EDuration="2.916088362s" podCreationTimestamp="2026-01-05 23:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:29:23.907744675 +0000 UTC m=+5856.279744114" watchObservedRunningTime="2026-01-05 23:29:23.916088362 +0000 UTC m=+5856.288087801" Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.940415 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:29:23 crc kubenswrapper[5034]: I0105 23:29:23.960474 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.967311 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.978758 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.987918 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:29:24 crc kubenswrapper[5034]: E0105 23:29:23.988610 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-log" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.988634 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-log" Jan 05 23:29:24 crc kubenswrapper[5034]: E0105 23:29:23.988647 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-metadata" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.988656 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-metadata" Jan 05 23:29:24 crc kubenswrapper[5034]: E0105 23:29:23.988681 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-api" Jan 05 
23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.988690 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-api" Jan 05 23:29:24 crc kubenswrapper[5034]: E0105 23:29:23.988700 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-log" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.988708 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-log" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.988961 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-log" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.988995 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" containerName="nova-api-api" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.989025 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-metadata" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.989040 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="72500551-77bd-417e-b221-1f1b47b84373" containerName="nova-metadata-log" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.990593 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.994350 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.994590 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:23.998904 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.007843 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.010444 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.015110 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.031908 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054154 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-config-data\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054260 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgsgj\" (UniqueName: \"kubernetes.io/projected/728a2a3b-3f68-40cb-9498-ebd7aab38533-kube-api-access-kgsgj\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054286 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wntv4\" (UniqueName: \"kubernetes.io/projected/2975cea1-64d5-478a-93dc-bf0a82b75277-kube-api-access-wntv4\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054308 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-config-data\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054382 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054483 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2975cea1-64d5-478a-93dc-bf0a82b75277-logs\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054597 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2a3b-3f68-40cb-9498-ebd7aab38533-logs\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054685 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.054740 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157193 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157252 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157340 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-config-data\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157388 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgsgj\" (UniqueName: \"kubernetes.io/projected/728a2a3b-3f68-40cb-9498-ebd7aab38533-kube-api-access-kgsgj\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157417 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wntv4\" (UniqueName: \"kubernetes.io/projected/2975cea1-64d5-478a-93dc-bf0a82b75277-kube-api-access-wntv4\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157438 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-config-data\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157510 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157531 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2975cea1-64d5-478a-93dc-bf0a82b75277-logs\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.157563 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2a3b-3f68-40cb-9498-ebd7aab38533-logs\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.159616 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2975cea1-64d5-478a-93dc-bf0a82b75277-logs\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.159850 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2a3b-3f68-40cb-9498-ebd7aab38533-logs\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.163416 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-config-data\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.163416 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-config-data\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.163947 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.165442 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.166949 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2975cea1-64d5-478a-93dc-bf0a82b75277-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.180235 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgsgj\" (UniqueName: \"kubernetes.io/projected/728a2a3b-3f68-40cb-9498-ebd7aab38533-kube-api-access-kgsgj\") pod \"nova-api-0\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") " pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.182377 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wntv4\" (UniqueName: \"kubernetes.io/projected/2975cea1-64d5-478a-93dc-bf0a82b75277-kube-api-access-wntv4\") pod \"nova-metadata-0\" (UID: \"2975cea1-64d5-478a-93dc-bf0a82b75277\") " pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.320986 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.344595 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.785786 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.895545 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2975cea1-64d5-478a-93dc-bf0a82b75277","Type":"ContainerStarted","Data":"e05c75c2f73c625140d3ef76e1e3ac28a7acf68efda68bb39ef39d9097ab670a"} Jan 05 23:29:24 crc kubenswrapper[5034]: W0105 23:29:24.908851 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod728a2a3b_3f68_40cb_9498_ebd7aab38533.slice/crio-e756dc512ed60b070d490b0a449a9a475fa06a2e7c9799cd41f6184aa331d79c WatchSource:0}: Error finding container e756dc512ed60b070d490b0a449a9a475fa06a2e7c9799cd41f6184aa331d79c: Status 404 returned error can't find the container with id e756dc512ed60b070d490b0a449a9a475fa06a2e7c9799cd41f6184aa331d79c Jan 05 23:29:24 crc kubenswrapper[5034]: I0105 23:29:24.909378 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.841041 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:29:25 crc kubenswrapper[5034]: E0105 23:29:25.841749 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.848843 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72500551-77bd-417e-b221-1f1b47b84373" path="/var/lib/kubelet/pods/72500551-77bd-417e-b221-1f1b47b84373/volumes" Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.849533 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f16a9d1-a973-4a85-b31a-86f7432d41d4" path="/var/lib/kubelet/pods/8f16a9d1-a973-4a85-b31a-86f7432d41d4/volumes" Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.906990 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2975cea1-64d5-478a-93dc-bf0a82b75277","Type":"ContainerStarted","Data":"6a0a3da0b9f5e05066b8e0afa24ce3fc94984cd470761de97ba25a963088be36"} Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.907040 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2975cea1-64d5-478a-93dc-bf0a82b75277","Type":"ContainerStarted","Data":"055ff622ac8fe5f3bf24a864cff1501ac478030a3691a4002ce7065e4f3e899c"} Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.908870 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"728a2a3b-3f68-40cb-9498-ebd7aab38533","Type":"ContainerStarted","Data":"b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e"} Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.908918 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"728a2a3b-3f68-40cb-9498-ebd7aab38533","Type":"ContainerStarted","Data":"ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a"} Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.908932 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"728a2a3b-3f68-40cb-9498-ebd7aab38533","Type":"ContainerStarted","Data":"e756dc512ed60b070d490b0a449a9a475fa06a2e7c9799cd41f6184aa331d79c"} Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.925450 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.925429175 podStartE2EDuration="2.925429175s" podCreationTimestamp="2026-01-05 23:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:29:25.921448123 +0000 UTC m=+5858.293447582" watchObservedRunningTime="2026-01-05 23:29:25.925429175 +0000 UTC m=+5858.297428614" Jan 05 23:29:25 crc kubenswrapper[5034]: I0105 23:29:25.940582 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.940557175 podStartE2EDuration="2.940557175s" podCreationTimestamp="2026-01-05 23:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:29:25.938923298 +0000 UTC m=+5858.310922747" watchObservedRunningTime="2026-01-05 23:29:25.940557175 +0000 UTC m=+5858.312556614" Jan 05 23:29:27 crc kubenswrapper[5034]: I0105 23:29:27.225843 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 23:29:29 crc kubenswrapper[5034]: I0105 23:29:29.321623 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 23:29:29 crc kubenswrapper[5034]: I0105 23:29:29.322037 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 23:29:32 crc kubenswrapper[5034]: I0105 23:29:32.226327 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 05 23:29:32 crc kubenswrapper[5034]: I0105 23:29:32.256279 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 05 23:29:32 crc kubenswrapper[5034]: I0105 23:29:32.993906 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 23:29:34 crc kubenswrapper[5034]: I0105 23:29:34.321506 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 23:29:34 crc kubenswrapper[5034]: I0105 23:29:34.322177 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 23:29:34 crc kubenswrapper[5034]: I0105 23:29:34.345811 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 23:29:34 crc kubenswrapper[5034]: I0105 23:29:34.345904 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 23:29:35 crc kubenswrapper[5034]: I0105 23:29:35.340270 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2975cea1-64d5-478a-93dc-bf0a82b75277" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.90:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 23:29:35 crc kubenswrapper[5034]: I0105 23:29:35.340297 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2975cea1-64d5-478a-93dc-bf0a82b75277" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.90:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 23:29:35 crc kubenswrapper[5034]: I0105 23:29:35.428326 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.91:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 23:29:35 crc kubenswrapper[5034]: I0105 23:29:35.428499 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.91:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 23:29:37 crc kubenswrapper[5034]: I0105 23:29:37.844807 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:29:37 crc kubenswrapper[5034]: E0105 23:29:37.847821 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:29:44 crc kubenswrapper[5034]: I0105 23:29:44.329862 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 23:29:44 crc kubenswrapper[5034]: I0105 23:29:44.330685 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 23:29:44 crc kubenswrapper[5034]: I0105 23:29:44.346178 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 23:29:44 crc kubenswrapper[5034]: I0105 23:29:44.346364 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 23:29:44 crc kubenswrapper[5034]: I0105 23:29:44.354323 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 23:29:44 crc kubenswrapper[5034]: I0105 23:29:44.354692 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 23:29:44 crc kubenswrapper[5034]: I0105 23:29:44.357998 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 23:29:44 crc kubenswrapper[5034]: I0105 23:29:44.358123 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.095037 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.100456 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.287948 5034 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8689fb8b95-928kk"] Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.296749 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.307321 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8689fb8b95-928kk"] Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.452883 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/4a51c571-ded5-40bc-a904-e3ed1dc7affb-kube-api-access-4sx6q\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.453252 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-dns-svc\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.453329 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-sb\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.453468 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-nb\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.453493 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-config\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.555734 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-dns-svc\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.556146 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-sb\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.556250 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-nb\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: 
\"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.556275 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-config\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.556346 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/4a51c571-ded5-40bc-a904-e3ed1dc7affb-kube-api-access-4sx6q\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.557254 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-dns-svc\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.557305 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-sb\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.557353 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-nb\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.557678 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-config\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.593532 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/4a51c571-ded5-40bc-a904-e3ed1dc7affb-kube-api-access-4sx6q\") pod \"dnsmasq-dns-8689fb8b95-928kk\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") " pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:45 crc kubenswrapper[5034]: I0105 23:29:45.623013 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:46 crc kubenswrapper[5034]: I0105 23:29:46.106831 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8689fb8b95-928kk"] Jan 05 23:29:47 crc kubenswrapper[5034]: I0105 23:29:47.125846 5034 generic.go:334] "Generic (PLEG): container finished" podID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" containerID="ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a" exitCode=0 Jan 05 23:29:47 crc kubenswrapper[5034]: I0105 23:29:47.127419 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" event={"ID":"4a51c571-ded5-40bc-a904-e3ed1dc7affb","Type":"ContainerDied","Data":"ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a"} Jan 05 23:29:47 crc kubenswrapper[5034]: I0105 23:29:47.127447 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" event={"ID":"4a51c571-ded5-40bc-a904-e3ed1dc7affb","Type":"ContainerStarted","Data":"53de7554bb4ad519bd4c0f43090c594711c28a2f17e80a1bf91ce05ffb76d708"} Jan 05 23:29:48 crc kubenswrapper[5034]: I0105 23:29:48.139359 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" event={"ID":"4a51c571-ded5-40bc-a904-e3ed1dc7affb","Type":"ContainerStarted","Data":"a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7"} Jan 05 23:29:48 crc kubenswrapper[5034]: I0105 23:29:48.139673 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" Jan 05 23:29:48 crc kubenswrapper[5034]: I0105 23:29:48.155605 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 23:29:48 crc kubenswrapper[5034]: I0105 23:29:48.156517 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-log" containerID="cri-o://ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a" gracePeriod=30 Jan 05 23:29:48 crc kubenswrapper[5034]: I0105 23:29:48.156606 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-api" containerID="cri-o://b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e" gracePeriod=30 Jan 05 23:29:48 crc kubenswrapper[5034]: I0105 23:29:48.195614 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" podStartSLOduration=3.19558147 podStartE2EDuration="3.19558147s" podCreationTimestamp="2026-01-05 23:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:29:48.186997827 +0000 UTC m=+5880.558997266" watchObservedRunningTime="2026-01-05 23:29:48.19558147 +0000 UTC m=+5880.567580909" Jan 05 23:29:48 crc kubenswrapper[5034]: I0105 23:29:48.838471 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:29:48 crc kubenswrapper[5034]: E0105 23:29:48.838972 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:29:49 crc kubenswrapper[5034]: I0105 23:29:49.155456 5034 generic.go:334] "Generic (PLEG): container finished" podID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerID="ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a" exitCode=143
Jan 05 23:29:49 crc kubenswrapper[5034]: I0105 23:29:49.155479 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"728a2a3b-3f68-40cb-9498-ebd7aab38533","Type":"ContainerDied","Data":"ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a"}
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.783736 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.898470 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2a3b-3f68-40cb-9498-ebd7aab38533-logs\") pod \"728a2a3b-3f68-40cb-9498-ebd7aab38533\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") "
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.898648 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-combined-ca-bundle\") pod \"728a2a3b-3f68-40cb-9498-ebd7aab38533\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") "
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.898857 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgsgj\" (UniqueName: \"kubernetes.io/projected/728a2a3b-3f68-40cb-9498-ebd7aab38533-kube-api-access-kgsgj\") pod \"728a2a3b-3f68-40cb-9498-ebd7aab38533\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") "
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.899010 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-config-data\") pod \"728a2a3b-3f68-40cb-9498-ebd7aab38533\" (UID: \"728a2a3b-3f68-40cb-9498-ebd7aab38533\") "
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.899015 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/728a2a3b-3f68-40cb-9498-ebd7aab38533-logs" (OuterVolumeSpecName: "logs") pod "728a2a3b-3f68-40cb-9498-ebd7aab38533" (UID: "728a2a3b-3f68-40cb-9498-ebd7aab38533"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.899638 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2a3b-3f68-40cb-9498-ebd7aab38533-logs\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.904937 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728a2a3b-3f68-40cb-9498-ebd7aab38533-kube-api-access-kgsgj" (OuterVolumeSpecName: "kube-api-access-kgsgj") pod "728a2a3b-3f68-40cb-9498-ebd7aab38533" (UID: "728a2a3b-3f68-40cb-9498-ebd7aab38533"). InnerVolumeSpecName "kube-api-access-kgsgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.928027 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-config-data" (OuterVolumeSpecName: "config-data") pod "728a2a3b-3f68-40cb-9498-ebd7aab38533" (UID: "728a2a3b-3f68-40cb-9498-ebd7aab38533"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:29:51 crc kubenswrapper[5034]: I0105 23:29:51.933923 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "728a2a3b-3f68-40cb-9498-ebd7aab38533" (UID: "728a2a3b-3f68-40cb-9498-ebd7aab38533"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.001683 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.001726 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2a3b-3f68-40cb-9498-ebd7aab38533-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.001742 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgsgj\" (UniqueName: \"kubernetes.io/projected/728a2a3b-3f68-40cb-9498-ebd7aab38533-kube-api-access-kgsgj\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.180910 5034 generic.go:334] "Generic (PLEG): container finished" podID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerID="b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e" exitCode=0
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.180970 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"728a2a3b-3f68-40cb-9498-ebd7aab38533","Type":"ContainerDied","Data":"b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e"}
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.181016 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"728a2a3b-3f68-40cb-9498-ebd7aab38533","Type":"ContainerDied","Data":"e756dc512ed60b070d490b0a449a9a475fa06a2e7c9799cd41f6184aa331d79c"}
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.181038 5034 scope.go:117] "RemoveContainer" containerID="b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.181214 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.215233 5034 scope.go:117] "RemoveContainer" containerID="ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.222849 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.252670 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.257979 5034 scope.go:117] "RemoveContainer" containerID="b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e"
Jan 05 23:29:52 crc kubenswrapper[5034]: E0105 23:29:52.258482 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e\": container with ID starting with b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e not found: ID does not exist" containerID="b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.258527 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e"} err="failed to get container status \"b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e\": rpc error: code = NotFound desc = could not find container \"b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e\": container with ID starting with b969503a884e749864e228eb35c079989b5ec78054b9c2b91f9cfb5d3640489e not found: ID does not exist"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.258561 5034 scope.go:117] "RemoveContainer" containerID="ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a"
Jan 05 23:29:52 crc kubenswrapper[5034]: E0105 23:29:52.258882 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a\": container with ID starting with ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a not found: ID does not exist" containerID="ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.258939 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a"} err="failed to get container status \"ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a\": rpc error: code = NotFound desc = could not find container \"ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a\": container with ID starting with ba40cd831972d89c3233a616073a41baf7a326d058e2c9541cce88e2c485804a not found: ID does not exist"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.270314 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:29:52 crc kubenswrapper[5034]: E0105 23:29:52.270698 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-api"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.270710 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-api"
Jan 05 23:29:52 crc kubenswrapper[5034]: E0105 23:29:52.270725 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-log"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.270731 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-log"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.270915 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-log"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.270936 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" containerName="nova-api-api"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.271979 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.275626 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.275922 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.276034 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.283029 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.308481 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.308547 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjhg\" (UniqueName: \"kubernetes.io/projected/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-kube-api-access-5cjhg\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.308625 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.308648 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.308716 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-logs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.308739 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-config-data\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.411398 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-logs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.411460 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-config-data\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.411505 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.411547 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjhg\" (UniqueName: \"kubernetes.io/projected/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-kube-api-access-5cjhg\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.411639 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.411667 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.411910 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-logs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.417923 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.417985 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.418045 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.418233 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-config-data\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.427947 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjhg\" (UniqueName: \"kubernetes.io/projected/71c3861d-c8e7-48ba-a4bb-6d40369e62d9-kube-api-access-5cjhg\") pod \"nova-api-0\" (UID: \"71c3861d-c8e7-48ba-a4bb-6d40369e62d9\") " pod="openstack/nova-api-0"
Jan 05 23:29:52 crc kubenswrapper[5034]: I0105 23:29:52.588172 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 05 23:29:53 crc kubenswrapper[5034]: I0105 23:29:53.080991 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 05 23:29:53 crc kubenswrapper[5034]: I0105 23:29:53.194631 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71c3861d-c8e7-48ba-a4bb-6d40369e62d9","Type":"ContainerStarted","Data":"2ce20e75c5cae9e96dc47dc6f874702c9a48b10d251c6b891b2d943f94eccc7e"}
Jan 05 23:29:53 crc kubenswrapper[5034]: I0105 23:29:53.856756 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="728a2a3b-3f68-40cb-9498-ebd7aab38533" path="/var/lib/kubelet/pods/728a2a3b-3f68-40cb-9498-ebd7aab38533/volumes"
Jan 05 23:29:54 crc kubenswrapper[5034]: I0105 23:29:54.208528 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71c3861d-c8e7-48ba-a4bb-6d40369e62d9","Type":"ContainerStarted","Data":"65e4116efe14b59d078c4bcf6fcad578c5121c736967ce4b93f5df9eef781f10"}
Jan 05 23:29:54 crc kubenswrapper[5034]: I0105 23:29:54.208593 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71c3861d-c8e7-48ba-a4bb-6d40369e62d9","Type":"ContainerStarted","Data":"85eaffc1ea38905d66d77edb45d573a62ca553cc2550abcc807a0f2334f2860f"}
Jan 05 23:29:54 crc kubenswrapper[5034]: I0105 23:29:54.238996 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.238961158 podStartE2EDuration="2.238961158s" podCreationTimestamp="2026-01-05 23:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:29:54.229546021 +0000 UTC m=+5886.601545460" watchObservedRunningTime="2026-01-05 23:29:54.238961158 +0000 UTC m=+5886.610960597"
Jan 05 23:29:55 crc kubenswrapper[5034]: I0105 23:29:55.624602 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8689fb8b95-928kk"
Jan 05 23:29:55 crc kubenswrapper[5034]: I0105 23:29:55.687282 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79684879-jpnp6"]
Jan 05 23:29:55 crc kubenswrapper[5034]: I0105 23:29:55.687554 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" podUID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" containerName="dnsmasq-dns" containerID="cri-o://9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23" gracePeriod=10
Jan 05 23:29:55 crc kubenswrapper[5034]: E0105 23:29:55.805543 5034 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9bfe66_27e7_40cb_bf42_a1e3f673eb97.slice/crio-conmon-9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23.scope\": RecentStats: unable to find data in memory cache]"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.232039 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79684879-jpnp6"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.235596 5034 generic.go:334] "Generic (PLEG): container finished" podID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" containerID="9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23" exitCode=0
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.235666 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79684879-jpnp6"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.235683 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" event={"ID":"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97","Type":"ContainerDied","Data":"9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23"}
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.235786 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79684879-jpnp6" event={"ID":"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97","Type":"ContainerDied","Data":"c2f7a562fcf602d1ed4ed0128c0acc00b64017648844c4dd8cb912b997f9c50c"}
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.235822 5034 scope.go:117] "RemoveContainer" containerID="9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.268099 5034 scope.go:117] "RemoveContainer" containerID="01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.332955 5034 scope.go:117] "RemoveContainer" containerID="9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23"
Jan 05 23:29:56 crc kubenswrapper[5034]: E0105 23:29:56.335272 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23\": container with ID starting with 9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23 not found: ID does not exist" containerID="9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.335311 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23"} err="failed to get container status \"9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23\": rpc error: code = NotFound desc = could not find container \"9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23\": container with ID starting with 9581ef86b413130687e5326472d46826dcdb3562eff6f1a79202230c78739b23 not found: ID does not exist"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.335338 5034 scope.go:117] "RemoveContainer" containerID="01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f"
Jan 05 23:29:56 crc kubenswrapper[5034]: E0105 23:29:56.339167 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f\": container with ID starting with 01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f not found: ID does not exist" containerID="01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.339196 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f"} err="failed to get container status \"01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f\": rpc error: code = NotFound desc = could not find container \"01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f\": container with ID starting with 01d93b6b6b962958786e77c22cb21eb4c3f3a8fb889dedd6d126024450c99c3f not found: ID does not exist"
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.436334 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-dns-svc\") pod \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") "
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.436445 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-nb\") pod \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") "
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.436511 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66dhw\" (UniqueName: \"kubernetes.io/projected/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-kube-api-access-66dhw\") pod \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") "
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.436558 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-config\") pod \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") "
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.436580 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-sb\") pod \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\" (UID: \"9b9bfe66-27e7-40cb-bf42-a1e3f673eb97\") "
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.443202 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-kube-api-access-66dhw" (OuterVolumeSpecName: "kube-api-access-66dhw") pod "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" (UID: "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97"). InnerVolumeSpecName "kube-api-access-66dhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.489544 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" (UID: "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.496340 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" (UID: "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.496906 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-config" (OuterVolumeSpecName: "config") pod "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" (UID: "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.497710 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" (UID: "9b9bfe66-27e7-40cb-bf42-a1e3f673eb97"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.539581 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.539905 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.540025 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66dhw\" (UniqueName: \"kubernetes.io/projected/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-kube-api-access-66dhw\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.540689 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-config\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.540800 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.591051 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79684879-jpnp6"]
Jan 05 23:29:56 crc kubenswrapper[5034]: I0105 23:29:56.601339 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79684879-jpnp6"]
Jan 05 23:29:57 crc kubenswrapper[5034]: I0105 23:29:57.851342 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" path="/var/lib/kubelet/pods/9b9bfe66-27e7-40cb-bf42-a1e3f673eb97/volumes"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.174643 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"]
Jan 05 23:30:00 crc kubenswrapper[5034]: E0105 23:30:00.176139 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" containerName="init"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.176156 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" containerName="init"
Jan 05 23:30:00 crc kubenswrapper[5034]: E0105 23:30:00.176166 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" containerName="dnsmasq-dns"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.176177 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" containerName="dnsmasq-dns"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.176444 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9bfe66-27e7-40cb-bf42-a1e3f673eb97" containerName="dnsmasq-dns"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.177475 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.180428 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.182395 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.187189 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"]
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.352517 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0061d33-aff7-4f88-a29f-f452769fcf8f-config-volume\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.352930 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0061d33-aff7-4f88-a29f-f452769fcf8f-secret-volume\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.353001 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr2xp\" (UniqueName: \"kubernetes.io/projected/e0061d33-aff7-4f88-a29f-f452769fcf8f-kube-api-access-tr2xp\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.455598 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0061d33-aff7-4f88-a29f-f452769fcf8f-secret-volume\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.455693 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr2xp\" (UniqueName: \"kubernetes.io/projected/e0061d33-aff7-4f88-a29f-f452769fcf8f-kube-api-access-tr2xp\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.455825 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0061d33-aff7-4f88-a29f-f452769fcf8f-config-volume\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.457573 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0061d33-aff7-4f88-a29f-f452769fcf8f-config-volume\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.485104 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr2xp\" (UniqueName: \"kubernetes.io/projected/e0061d33-aff7-4f88-a29f-f452769fcf8f-kube-api-access-tr2xp\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.486522 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0061d33-aff7-4f88-a29f-f452769fcf8f-secret-volume\") pod \"collect-profiles-29460930-w92dl\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.506550 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.838839 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"
Jan 05 23:30:00 crc kubenswrapper[5034]: E0105 23:30:00.839571 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:30:00 crc kubenswrapper[5034]: W0105 23:30:00.981337 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0061d33_aff7_4f88_a29f_f452769fcf8f.slice/crio-1d776850350b0ad178580018b913f035066bccbaba46b5995bb7ae71ef40a121 WatchSource:0}: Error finding container 1d776850350b0ad178580018b913f035066bccbaba46b5995bb7ae71ef40a121: Status 404 returned error can't find the container with id 1d776850350b0ad178580018b913f035066bccbaba46b5995bb7ae71ef40a121
Jan 05 23:30:00 crc kubenswrapper[5034]: I0105 23:30:00.984370 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"]
Jan 05 23:30:01 crc kubenswrapper[5034]: I0105 23:30:01.288713 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl" event={"ID":"e0061d33-aff7-4f88-a29f-f452769fcf8f","Type":"ContainerStarted","Data":"b9a73428f93930317169c6ea9eb8abda9c4673ba47411ef619a36a5d9ec3af14"}
Jan 05 23:30:01 crc kubenswrapper[5034]: I0105 23:30:01.289200 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl" event={"ID":"e0061d33-aff7-4f88-a29f-f452769fcf8f","Type":"ContainerStarted","Data":"1d776850350b0ad178580018b913f035066bccbaba46b5995bb7ae71ef40a121"}
Jan 05 23:30:01 crc kubenswrapper[5034]: I0105 23:30:01.319159 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl" podStartSLOduration=1.319115608 podStartE2EDuration="1.319115608s" podCreationTimestamp="2026-01-05 23:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:30:01.305154642 +0000 UTC m=+5893.677154101" watchObservedRunningTime="2026-01-05 23:30:01.319115608 +0000 UTC m=+5893.691115037"
Jan 05 23:30:02 crc kubenswrapper[5034]: I0105 23:30:02.299835 5034 generic.go:334] "Generic (PLEG): container finished" podID="e0061d33-aff7-4f88-a29f-f452769fcf8f" containerID="b9a73428f93930317169c6ea9eb8abda9c4673ba47411ef619a36a5d9ec3af14" exitCode=0
Jan 05 23:30:02 crc kubenswrapper[5034]: I0105 23:30:02.299897 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl" event={"ID":"e0061d33-aff7-4f88-a29f-f452769fcf8f","Type":"ContainerDied","Data":"b9a73428f93930317169c6ea9eb8abda9c4673ba47411ef619a36a5d9ec3af14"}
Jan 05 23:30:02 crc kubenswrapper[5034]: I0105 23:30:02.589403 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 05 23:30:02 crc kubenswrapper[5034]: I0105 23:30:02.589487 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.604310 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71c3861d-c8e7-48ba-a4bb-6d40369e62d9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.93:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.605526 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71c3861d-c8e7-48ba-a4bb-6d40369e62d9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.93:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.720766 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.764801 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0061d33-aff7-4f88-a29f-f452769fcf8f-secret-volume\") pod \"e0061d33-aff7-4f88-a29f-f452769fcf8f\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") "
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.765044 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr2xp\" (UniqueName: \"kubernetes.io/projected/e0061d33-aff7-4f88-a29f-f452769fcf8f-kube-api-access-tr2xp\") pod \"e0061d33-aff7-4f88-a29f-f452769fcf8f\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") "
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.765090 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0061d33-aff7-4f88-a29f-f452769fcf8f-config-volume\") pod \"e0061d33-aff7-4f88-a29f-f452769fcf8f\" (UID: \"e0061d33-aff7-4f88-a29f-f452769fcf8f\") "
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.765795 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0061d33-aff7-4f88-a29f-f452769fcf8f-config-volume" (OuterVolumeSpecName: "config-volume") pod "e0061d33-aff7-4f88-a29f-f452769fcf8f" (UID: "e0061d33-aff7-4f88-a29f-f452769fcf8f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.772618 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0061d33-aff7-4f88-a29f-f452769fcf8f-kube-api-access-tr2xp" (OuterVolumeSpecName: "kube-api-access-tr2xp") pod "e0061d33-aff7-4f88-a29f-f452769fcf8f" (UID: "e0061d33-aff7-4f88-a29f-f452769fcf8f"). InnerVolumeSpecName "kube-api-access-tr2xp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.773184 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0061d33-aff7-4f88-a29f-f452769fcf8f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e0061d33-aff7-4f88-a29f-f452769fcf8f" (UID: "e0061d33-aff7-4f88-a29f-f452769fcf8f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.866859 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0061d33-aff7-4f88-a29f-f452769fcf8f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.867314 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr2xp\" (UniqueName: \"kubernetes.io/projected/e0061d33-aff7-4f88-a29f-f452769fcf8f-kube-api-access-tr2xp\") on node \"crc\" DevicePath \"\""
Jan 05 23:30:03 crc kubenswrapper[5034]: I0105 23:30:03.867327 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0061d33-aff7-4f88-a29f-f452769fcf8f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 05 23:30:04 crc kubenswrapper[5034]: I0105 23:30:04.319166 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl" event={"ID":"e0061d33-aff7-4f88-a29f-f452769fcf8f","Type":"ContainerDied","Data":"1d776850350b0ad178580018b913f035066bccbaba46b5995bb7ae71ef40a121"}
Jan 05 23:30:04 crc kubenswrapper[5034]: I0105 23:30:04.319306 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d776850350b0ad178580018b913f035066bccbaba46b5995bb7ae71ef40a121"
Jan 05 23:30:04 crc kubenswrapper[5034]: I0105 23:30:04.319457 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460930-w92dl"
Jan 05 23:30:04 crc kubenswrapper[5034]: I0105 23:30:04.401030 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x"]
Jan 05 23:30:04 crc kubenswrapper[5034]: I0105 23:30:04.411890 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460885-kc52x"]
Jan 05 23:30:05 crc kubenswrapper[5034]: I0105 23:30:05.862505 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340556b0-d1f9-4e04-bd8e-cd2973e35c37" path="/var/lib/kubelet/pods/340556b0-d1f9-4e04-bd8e-cd2973e35c37/volumes"
Jan 05 23:30:12 crc kubenswrapper[5034]: I0105 23:30:12.599073 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 05 23:30:12 crc kubenswrapper[5034]: I0105 23:30:12.600679 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 05 23:30:12 crc kubenswrapper[5034]: I0105 23:30:12.600801 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 05 23:30:12 crc kubenswrapper[5034]: I0105 23:30:12.613203 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 05 23:30:12 crc kubenswrapper[5034]: I0105 23:30:12.838681 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"
Jan 05 23:30:12 crc kubenswrapper[5034]: E0105 23:30:12.838948 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:30:13 crc kubenswrapper[5034]: I0105 23:30:13.410639 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 05 23:30:13 crc kubenswrapper[5034]: I0105 23:30:13.424306 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 05 23:30:27 crc kubenswrapper[5034]: I0105 23:30:27.865458 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"
Jan 05 23:30:27 crc kubenswrapper[5034]: E0105 23:30:27.866768 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:30:34 crc kubenswrapper[5034]: I0105 23:30:34.057335 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7n6zf"]
Jan 05 23:30:34 crc kubenswrapper[5034]: I0105 23:30:34.066285 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-71ef-account-create-update-7wzzf"]
Jan 05 23:30:34 crc kubenswrapper[5034]: I0105 23:30:34.074443 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-71ef-account-create-update-7wzzf"]
Jan 05 23:30:34 crc kubenswrapper[5034]: I0105 23:30:34.083482 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7n6zf"]
Jan 05 23:30:35 crc kubenswrapper[5034]: I0105 23:30:35.855639 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72f5d690-a7b2-4057-bc84-4108941c17ca" path="/var/lib/kubelet/pods/72f5d690-a7b2-4057-bc84-4108941c17ca/volumes"
Jan 05 23:30:35 crc kubenswrapper[5034]: I0105 23:30:35.856525 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae531dad-14d8-4872-8542-8b1c6fd9e388" path="/var/lib/kubelet/pods/ae531dad-14d8-4872-8542-8b1c6fd9e388/volumes"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.839226 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef"
Jan 05 23:30:38 crc kubenswrapper[5034]: E0105 23:30:38.839950 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.889500 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-w6vqp"]
Jan 05 23:30:38 crc kubenswrapper[5034]: E0105 23:30:38.890003 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0061d33-aff7-4f88-a29f-f452769fcf8f" containerName="collect-profiles"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.890031 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0061d33-aff7-4f88-a29f-f452769fcf8f" containerName="collect-profiles"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.890308 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0061d33-aff7-4f88-a29f-f452769fcf8f" containerName="collect-profiles"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.891209 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.895762 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.895799 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.896135 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dqvn2"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.904262 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7kpp9"]
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.906443 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.912821 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w6vqp"]
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.927343 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7kpp9"]
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.960728 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-lib\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.960824 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-log-ovn\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.960851 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-run\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.960885 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-etc-ovs\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.960910 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-run-ovn\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.960932 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-scripts\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.961002 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-ovn-controller-tls-certs\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.961028 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwkw2\" (UniqueName: \"kubernetes.io/projected/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-kube-api-access-pwkw2\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.961052 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-run\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.961099 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-scripts\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.961156 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-combined-ca-bundle\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.961215 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-log\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:38 crc kubenswrapper[5034]: I0105 23:30:38.961285 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpwf5\" (UniqueName: \"kubernetes.io/projected/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-kube-api-access-tpwf5\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063466 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-ovn-controller-tls-certs\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063533 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwkw2\" (UniqueName: \"kubernetes.io/projected/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-kube-api-access-pwkw2\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063574 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-run\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063607 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-scripts\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063663 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-combined-ca-bundle\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063705 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-log\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063768 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpwf5\" (UniqueName: \"kubernetes.io/projected/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-kube-api-access-tpwf5\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063805 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-lib\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063838 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-log-ovn\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063867 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-run\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063905 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-etc-ovs\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063929 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-run-ovn\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.063953 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-scripts\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.064712 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-etc-ovs\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.064741 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-run\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.064741 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-run-ovn\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.064712 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-log\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.064772 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-var-lib\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.064765 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-run\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.064836 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-var-log-ovn\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.067172 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-scripts\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.068288 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-scripts\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.070742 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-ovn-controller-tls-certs\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.071946 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-combined-ca-bundle\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.087851 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpwf5\" (UniqueName: \"kubernetes.io/projected/0c05c173-ed68-4f0e-a223-649e5c2cb5f3-kube-api-access-tpwf5\") pod \"ovn-controller-ovs-7kpp9\" (UID: \"0c05c173-ed68-4f0e-a223-649e5c2cb5f3\") " pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.091147 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwkw2\" (UniqueName: \"kubernetes.io/projected/9fd531ae-4d59-4afe-aa01-6ad07a62b64c-kube-api-access-pwkw2\") pod \"ovn-controller-w6vqp\" (UID: \"9fd531ae-4d59-4afe-aa01-6ad07a62b64c\") " pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.268539 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.284884 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7kpp9"
Jan 05 23:30:39 crc kubenswrapper[5034]: I0105 23:30:39.895997 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w6vqp"]
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.059349 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8qsjc"]
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.077598 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8qsjc"]
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.217355 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7kpp9"]
Jan 05 23:30:40 crc kubenswrapper[5034]: W0105 23:30:40.228067 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c05c173_ed68_4f0e_a223_649e5c2cb5f3.slice/crio-501865a141d93f461d4b035d9fe8c39cdc2b8b9c4f3b2fa7a590b0ed75faa8af WatchSource:0}: Error finding container 501865a141d93f461d4b035d9fe8c39cdc2b8b9c4f3b2fa7a590b0ed75faa8af: Status 404 returned error can't find the container with id 501865a141d93f461d4b035d9fe8c39cdc2b8b9c4f3b2fa7a590b0ed75faa8af
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.525022 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-nnn7q"]
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.527363 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-nnn7q"
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.560424 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-nnn7q"]
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.613007 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrgbk\" (UniqueName: \"kubernetes.io/projected/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-kube-api-access-jrgbk\") pod \"octavia-db-create-nnn7q\" (UID: \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\") " pod="openstack/octavia-db-create-nnn7q"
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.613125 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-operator-scripts\") pod \"octavia-db-create-nnn7q\" (UID: \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\") " pod="openstack/octavia-db-create-nnn7q"
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.710891 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp" event={"ID":"9fd531ae-4d59-4afe-aa01-6ad07a62b64c","Type":"ContainerStarted","Data":"b3f4c17932128fe4462007f24b646761e93ab663abe9a477be627ab6aec03244"}
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.711411 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp" event={"ID":"9fd531ae-4d59-4afe-aa01-6ad07a62b64c","Type":"ContainerStarted","Data":"9464072c762d85d6ee896e91d60b8e0f9ad7b47a7df6c4a517cb4e6d62f1ade8"}
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.713577 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-w6vqp"
Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.715893 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-operator-scripts\") pod \"octavia-db-create-nnn7q\" (UID: \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\") " pod="openstack/octavia-db-create-nnn7q" Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.716343 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrgbk\" (UniqueName: \"kubernetes.io/projected/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-kube-api-access-jrgbk\") pod \"octavia-db-create-nnn7q\" (UID: \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\") " pod="openstack/octavia-db-create-nnn7q" Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.717732 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-operator-scripts\") pod \"octavia-db-create-nnn7q\" (UID: \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\") " pod="openstack/octavia-db-create-nnn7q" Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.719926 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7kpp9" event={"ID":"0c05c173-ed68-4f0e-a223-649e5c2cb5f3","Type":"ContainerStarted","Data":"40f5c35c6707778c11a09b7ffb710d8d69638d5ce12a14632aa1e0bef6ca8d38"} Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.720014 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7kpp9" event={"ID":"0c05c173-ed68-4f0e-a223-649e5c2cb5f3","Type":"ContainerStarted","Data":"501865a141d93f461d4b035d9fe8c39cdc2b8b9c4f3b2fa7a590b0ed75faa8af"} Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.753597 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-w6vqp" podStartSLOduration=2.753569126 podStartE2EDuration="2.753569126s" podCreationTimestamp="2026-01-05 23:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:30:40.744255091 +0000 UTC m=+5933.116254520" watchObservedRunningTime="2026-01-05 23:30:40.753569126 +0000 UTC m=+5933.125568575" Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.755904 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrgbk\" (UniqueName: \"kubernetes.io/projected/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-kube-api-access-jrgbk\") pod \"octavia-db-create-nnn7q\" (UID: \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\") " pod="openstack/octavia-db-create-nnn7q" Jan 05 23:30:40 crc kubenswrapper[5034]: I0105 23:30:40.862658 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-nnn7q" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.349968 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-nnn7q"] Jan 05 23:30:41 crc kubenswrapper[5034]: W0105 23:30:41.358381 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1caa25_b7ed_49cc_9adf_41a81873b4ea.slice/crio-bf630df124bfaa78db4cdf3d3e779206761bffde55cb9100afaa29cc830f4f6f WatchSource:0}: Error finding container bf630df124bfaa78db4cdf3d3e779206761bffde55cb9100afaa29cc830f4f6f: Status 404 returned error can't find the container with id bf630df124bfaa78db4cdf3d3e779206761bffde55cb9100afaa29cc830f4f6f Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.497250 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-ec43-account-create-update-j7d9r"] Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.498851 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.503477 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.507992 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ec43-account-create-update-j7d9r"] Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.537760 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577a48c3-b291-4924-a70b-ce1ad07ea2b7-operator-scripts\") pod \"octavia-ec43-account-create-update-j7d9r\" (UID: \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\") " pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.537905 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wndjm\" (UniqueName: \"kubernetes.io/projected/577a48c3-b291-4924-a70b-ce1ad07ea2b7-kube-api-access-wndjm\") pod \"octavia-ec43-account-create-update-j7d9r\" (UID: \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\") " pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.586939 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-22df5"] Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.588999 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.594261 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.616084 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-22df5"] Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.640130 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f52c5c97-57a6-4509-bc03-46bb124297d2-ovs-rundir\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.640212 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wndjm\" (UniqueName: \"kubernetes.io/projected/577a48c3-b291-4924-a70b-ce1ad07ea2b7-kube-api-access-wndjm\") pod \"octavia-ec43-account-create-update-j7d9r\" (UID: \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\") " pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.640257 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f52c5c97-57a6-4509-bc03-46bb124297d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.640291 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f52c5c97-57a6-4509-bc03-46bb124297d2-config\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.640340 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f52c5c97-57a6-4509-bc03-46bb124297d2-ovn-rundir\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.640392 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52c5c97-57a6-4509-bc03-46bb124297d2-combined-ca-bundle\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.640412 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wflk4\" (UniqueName: \"kubernetes.io/projected/f52c5c97-57a6-4509-bc03-46bb124297d2-kube-api-access-wflk4\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.640442 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577a48c3-b291-4924-a70b-ce1ad07ea2b7-operator-scripts\") pod 
\"octavia-ec43-account-create-update-j7d9r\" (UID: \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\") " pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.641677 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577a48c3-b291-4924-a70b-ce1ad07ea2b7-operator-scripts\") pod \"octavia-ec43-account-create-update-j7d9r\" (UID: \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\") " pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.663681 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wndjm\" (UniqueName: \"kubernetes.io/projected/577a48c3-b291-4924-a70b-ce1ad07ea2b7-kube-api-access-wndjm\") pod \"octavia-ec43-account-create-update-j7d9r\" (UID: \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\") " pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.732392 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-nnn7q" event={"ID":"cc1caa25-b7ed-49cc-9adf-41a81873b4ea","Type":"ContainerStarted","Data":"b942d7d6bd64d44eca46ef1276f7c0a285907164d6f1bde6b20b99981aced54c"} Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.732462 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-nnn7q" event={"ID":"cc1caa25-b7ed-49cc-9adf-41a81873b4ea","Type":"ContainerStarted","Data":"bf630df124bfaa78db4cdf3d3e779206761bffde55cb9100afaa29cc830f4f6f"} Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.734393 5034 generic.go:334] "Generic (PLEG): container finished" podID="0c05c173-ed68-4f0e-a223-649e5c2cb5f3" containerID="40f5c35c6707778c11a09b7ffb710d8d69638d5ce12a14632aa1e0bef6ca8d38" exitCode=0 Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.734439 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7kpp9" event={"ID":"0c05c173-ed68-4f0e-a223-649e5c2cb5f3","Type":"ContainerDied","Data":"40f5c35c6707778c11a09b7ffb710d8d69638d5ce12a14632aa1e0bef6ca8d38"} Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.742105 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f52c5c97-57a6-4509-bc03-46bb124297d2-ovs-rundir\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.742190 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f52c5c97-57a6-4509-bc03-46bb124297d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.742254 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f52c5c97-57a6-4509-bc03-46bb124297d2-config\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.742299 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/f52c5c97-57a6-4509-bc03-46bb124297d2-ovn-rundir\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.742342 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52c5c97-57a6-4509-bc03-46bb124297d2-combined-ca-bundle\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.742366 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wflk4\" (UniqueName: \"kubernetes.io/projected/f52c5c97-57a6-4509-bc03-46bb124297d2-kube-api-access-wflk4\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.742572 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f52c5c97-57a6-4509-bc03-46bb124297d2-ovs-rundir\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.742672 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f52c5c97-57a6-4509-bc03-46bb124297d2-ovn-rundir\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.743541 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f52c5c97-57a6-4509-bc03-46bb124297d2-config\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.758588 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52c5c97-57a6-4509-bc03-46bb124297d2-combined-ca-bundle\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.766091 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-create-nnn7q" podStartSLOduration=1.76606116 podStartE2EDuration="1.76606116s" podCreationTimestamp="2026-01-05 23:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:30:41.764758033 +0000 UTC m=+5934.136757472" watchObservedRunningTime="2026-01-05 23:30:41.76606116 +0000 UTC m=+5934.138060599" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.774308 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f52c5c97-57a6-4509-bc03-46bb124297d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.776677 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wflk4\" (UniqueName: \"kubernetes.io/projected/f52c5c97-57a6-4509-bc03-46bb124297d2-kube-api-access-wflk4\") pod \"ovn-controller-metrics-22df5\" (UID: \"f52c5c97-57a6-4509-bc03-46bb124297d2\") " pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.855833 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219ee213-be92-46e6-ab65-d0dbc0d6ac85" path="/var/lib/kubelet/pods/219ee213-be92-46e6-ab65-d0dbc0d6ac85/volumes" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.859789 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:41 crc kubenswrapper[5034]: I0105 23:30:41.916551 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-22df5" Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.418577 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ec43-account-create-update-j7d9r"] Jan 05 23:30:42 crc kubenswrapper[5034]: W0105 23:30:42.437825 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod577a48c3_b291_4924_a70b_ce1ad07ea2b7.slice/crio-8ec246ab3c7ebedb28a7a058fbde85ebf5325d5df0612c94ec05b57e5464a7ab WatchSource:0}: Error finding container 8ec246ab3c7ebedb28a7a058fbde85ebf5325d5df0612c94ec05b57e5464a7ab: Status 404 returned error can't find the container with id 8ec246ab3c7ebedb28a7a058fbde85ebf5325d5df0612c94ec05b57e5464a7ab Jan 05 23:30:42 crc kubenswrapper[5034]: W0105 23:30:42.537797 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf52c5c97_57a6_4509_bc03_46bb124297d2.slice/crio-4ce43f260de55a581a73d4a237a4543c1a23da4a5d17f874b1c075ae23f333c5 WatchSource:0}: Error finding container 4ce43f260de55a581a73d4a237a4543c1a23da4a5d17f874b1c075ae23f333c5: Status 404 returned error can't find the container with id 4ce43f260de55a581a73d4a237a4543c1a23da4a5d17f874b1c075ae23f333c5 Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.538031 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-22df5"] Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.761992 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-22df5" event={"ID":"f52c5c97-57a6-4509-bc03-46bb124297d2","Type":"ContainerStarted","Data":"4ce43f260de55a581a73d4a237a4543c1a23da4a5d17f874b1c075ae23f333c5"} Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.768636 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ec43-account-create-update-j7d9r" event={"ID":"577a48c3-b291-4924-a70b-ce1ad07ea2b7","Type":"ContainerStarted","Data":"8ec246ab3c7ebedb28a7a058fbde85ebf5325d5df0612c94ec05b57e5464a7ab"} Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.772879 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7kpp9" event={"ID":"0c05c173-ed68-4f0e-a223-649e5c2cb5f3","Type":"ContainerStarted","Data":"033bb3f2794d3d77d3905e0311bbdb405bb2faed708c99c13e38574606026cfd"} Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.772945 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7kpp9" 
event={"ID":"0c05c173-ed68-4f0e-a223-649e5c2cb5f3","Type":"ContainerStarted","Data":"28598c205361404b991540642d8a62f261eaf4584ce4f8a439f0b7fcc15669ba"} Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.774268 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7kpp9" Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.774366 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7kpp9" Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.785236 5034 generic.go:334] "Generic (PLEG): container finished" podID="cc1caa25-b7ed-49cc-9adf-41a81873b4ea" containerID="b942d7d6bd64d44eca46ef1276f7c0a285907164d6f1bde6b20b99981aced54c" exitCode=0 Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.785447 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-nnn7q" event={"ID":"cc1caa25-b7ed-49cc-9adf-41a81873b4ea","Type":"ContainerDied","Data":"b942d7d6bd64d44eca46ef1276f7c0a285907164d6f1bde6b20b99981aced54c"} Jan 05 23:30:42 crc kubenswrapper[5034]: I0105 23:30:42.808660 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7kpp9" podStartSLOduration=4.808640518 podStartE2EDuration="4.808640518s" podCreationTimestamp="2026-01-05 23:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:30:42.807371352 +0000 UTC m=+5935.179370801" watchObservedRunningTime="2026-01-05 23:30:42.808640518 +0000 UTC m=+5935.180639957" Jan 05 23:30:43 crc kubenswrapper[5034]: I0105 23:30:43.799637 5034 generic.go:334] "Generic (PLEG): container finished" podID="577a48c3-b291-4924-a70b-ce1ad07ea2b7" containerID="1a12659db1b9adca314a1d686372690b17a2b3086f12aeb0ca31ddb9f60af19d" exitCode=0 Jan 05 23:30:43 crc kubenswrapper[5034]: I0105 23:30:43.799741 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ec43-account-create-update-j7d9r" event={"ID":"577a48c3-b291-4924-a70b-ce1ad07ea2b7","Type":"ContainerDied","Data":"1a12659db1b9adca314a1d686372690b17a2b3086f12aeb0ca31ddb9f60af19d"} Jan 05 23:30:43 crc kubenswrapper[5034]: I0105 23:30:43.804208 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-22df5" event={"ID":"f52c5c97-57a6-4509-bc03-46bb124297d2","Type":"ContainerStarted","Data":"ed75ac58e74a9d2e61290610cf05faa2c2b2ba2d689d9f832e37c9a61905a177"} Jan 05 23:30:43 crc kubenswrapper[5034]: I0105 23:30:43.883061 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-22df5" podStartSLOduration=2.883033448 podStartE2EDuration="2.883033448s" podCreationTimestamp="2026-01-05 23:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:30:43.84186941 +0000 UTC m=+5936.213868849" watchObservedRunningTime="2026-01-05 23:30:43.883033448 +0000 UTC m=+5936.255032887" Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.246915 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-nnn7q" Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.302726 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrgbk\" (UniqueName: \"kubernetes.io/projected/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-kube-api-access-jrgbk\") pod \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\" (UID: \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\") " Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.303033 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-operator-scripts\") pod \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\" (UID: \"cc1caa25-b7ed-49cc-9adf-41a81873b4ea\") " Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.303864 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc1caa25-b7ed-49cc-9adf-41a81873b4ea" (UID: "cc1caa25-b7ed-49cc-9adf-41a81873b4ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.310048 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-kube-api-access-jrgbk" (OuterVolumeSpecName: "kube-api-access-jrgbk") pod "cc1caa25-b7ed-49cc-9adf-41a81873b4ea" (UID: "cc1caa25-b7ed-49cc-9adf-41a81873b4ea"). InnerVolumeSpecName "kube-api-access-jrgbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.405523 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.405566 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrgbk\" (UniqueName: \"kubernetes.io/projected/cc1caa25-b7ed-49cc-9adf-41a81873b4ea-kube-api-access-jrgbk\") on node \"crc\" DevicePath \"\"" Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.823111 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-nnn7q" Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.825404 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-nnn7q" event={"ID":"cc1caa25-b7ed-49cc-9adf-41a81873b4ea","Type":"ContainerDied","Data":"bf630df124bfaa78db4cdf3d3e779206761bffde55cb9100afaa29cc830f4f6f"} Jan 05 23:30:44 crc kubenswrapper[5034]: I0105 23:30:44.825460 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf630df124bfaa78db4cdf3d3e779206761bffde55cb9100afaa29cc830f4f6f" Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.239436 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.331573 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577a48c3-b291-4924-a70b-ce1ad07ea2b7-operator-scripts\") pod \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\" (UID: \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\") " Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.331748 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wndjm\" (UniqueName: \"kubernetes.io/projected/577a48c3-b291-4924-a70b-ce1ad07ea2b7-kube-api-access-wndjm\") pod \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\" (UID: \"577a48c3-b291-4924-a70b-ce1ad07ea2b7\") " Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.332510 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/577a48c3-b291-4924-a70b-ce1ad07ea2b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "577a48c3-b291-4924-a70b-ce1ad07ea2b7" (UID: "577a48c3-b291-4924-a70b-ce1ad07ea2b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.332666 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577a48c3-b291-4924-a70b-ce1ad07ea2b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.338034 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577a48c3-b291-4924-a70b-ce1ad07ea2b7-kube-api-access-wndjm" (OuterVolumeSpecName: "kube-api-access-wndjm") pod "577a48c3-b291-4924-a70b-ce1ad07ea2b7" (UID: "577a48c3-b291-4924-a70b-ce1ad07ea2b7"). InnerVolumeSpecName "kube-api-access-wndjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.436457 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wndjm\" (UniqueName: \"kubernetes.io/projected/577a48c3-b291-4924-a70b-ce1ad07ea2b7-kube-api-access-wndjm\") on node \"crc\" DevicePath \"\"" Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.831815 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ec43-account-create-update-j7d9r" event={"ID":"577a48c3-b291-4924-a70b-ce1ad07ea2b7","Type":"ContainerDied","Data":"8ec246ab3c7ebedb28a7a058fbde85ebf5325d5df0612c94ec05b57e5464a7ab"} Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.831872 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ec246ab3c7ebedb28a7a058fbde85ebf5325d5df0612c94ec05b57e5464a7ab" Jan 05 23:30:45 crc kubenswrapper[5034]: I0105 23:30:45.831895 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ec43-account-create-update-j7d9r" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.479529 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-sczmb"] Jan 05 23:30:47 crc kubenswrapper[5034]: E0105 23:30:47.480631 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1caa25-b7ed-49cc-9adf-41a81873b4ea" containerName="mariadb-database-create" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.480651 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1caa25-b7ed-49cc-9adf-41a81873b4ea" containerName="mariadb-database-create" Jan 05 23:30:47 crc kubenswrapper[5034]: E0105 23:30:47.480699 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577a48c3-b291-4924-a70b-ce1ad07ea2b7" containerName="mariadb-account-create-update" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.480707 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="577a48c3-b291-4924-a70b-ce1ad07ea2b7" containerName="mariadb-account-create-update" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.480946 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1caa25-b7ed-49cc-9adf-41a81873b4ea" containerName="mariadb-database-create" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.480982 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="577a48c3-b291-4924-a70b-ce1ad07ea2b7" containerName="mariadb-account-create-update" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.481906 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.500257 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-sczmb"] Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.593887 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/365e662d-e16c-4f51-af06-055f14000dc6-operator-scripts\") pod \"octavia-persistence-db-create-sczmb\" (UID: \"365e662d-e16c-4f51-af06-055f14000dc6\") " pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.593994 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld7sh\" (UniqueName: \"kubernetes.io/projected/365e662d-e16c-4f51-af06-055f14000dc6-kube-api-access-ld7sh\") pod \"octavia-persistence-db-create-sczmb\" (UID: \"365e662d-e16c-4f51-af06-055f14000dc6\") " pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.696211 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/365e662d-e16c-4f51-af06-055f14000dc6-operator-scripts\") pod \"octavia-persistence-db-create-sczmb\" (UID: \"365e662d-e16c-4f51-af06-055f14000dc6\") " pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.696382 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld7sh\" (UniqueName: \"kubernetes.io/projected/365e662d-e16c-4f51-af06-055f14000dc6-kube-api-access-ld7sh\") pod \"octavia-persistence-db-create-sczmb\" (UID: \"365e662d-e16c-4f51-af06-055f14000dc6\") " 
pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.697984 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/365e662d-e16c-4f51-af06-055f14000dc6-operator-scripts\") pod \"octavia-persistence-db-create-sczmb\" (UID: \"365e662d-e16c-4f51-af06-055f14000dc6\") " pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.717991 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld7sh\" (UniqueName: \"kubernetes.io/projected/365e662d-e16c-4f51-af06-055f14000dc6-kube-api-access-ld7sh\") pod \"octavia-persistence-db-create-sczmb\" (UID: \"365e662d-e16c-4f51-af06-055f14000dc6\") " pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:47 crc kubenswrapper[5034]: I0105 23:30:47.806065 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.307591 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-sczmb"] Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.498244 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-ab47-account-create-update-mcb49"] Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.499942 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.503739 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.509754 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ab47-account-create-update-mcb49"] Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.630925 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383eea24-e3ab-47ab-9a30-196673dd0ccd-operator-scripts\") pod \"octavia-ab47-account-create-update-mcb49\" (UID: \"383eea24-e3ab-47ab-9a30-196673dd0ccd\") " pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.631335 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxlb9\" (UniqueName: \"kubernetes.io/projected/383eea24-e3ab-47ab-9a30-196673dd0ccd-kube-api-access-bxlb9\") pod \"octavia-ab47-account-create-update-mcb49\" (UID: \"383eea24-e3ab-47ab-9a30-196673dd0ccd\") " pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.733697 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383eea24-e3ab-47ab-9a30-196673dd0ccd-operator-scripts\") pod \"octavia-ab47-account-create-update-mcb49\" (UID: \"383eea24-e3ab-47ab-9a30-196673dd0ccd\") " pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.734094 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxlb9\" (UniqueName: \"kubernetes.io/projected/383eea24-e3ab-47ab-9a30-196673dd0ccd-kube-api-access-bxlb9\") pod 
\"octavia-ab47-account-create-update-mcb49\" (UID: \"383eea24-e3ab-47ab-9a30-196673dd0ccd\") " pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.734597 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383eea24-e3ab-47ab-9a30-196673dd0ccd-operator-scripts\") pod \"octavia-ab47-account-create-update-mcb49\" (UID: \"383eea24-e3ab-47ab-9a30-196673dd0ccd\") " pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.753595 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxlb9\" (UniqueName: \"kubernetes.io/projected/383eea24-e3ab-47ab-9a30-196673dd0ccd-kube-api-access-bxlb9\") pod \"octavia-ab47-account-create-update-mcb49\" (UID: \"383eea24-e3ab-47ab-9a30-196673dd0ccd\") " pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.835008 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.867224 5034 generic.go:334] "Generic (PLEG): container finished" podID="365e662d-e16c-4f51-af06-055f14000dc6" containerID="6954754df6166fc84af6bae312741ca24fc9f16d31798cee2e7ba046d7a5c7dc" exitCode=0 Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.867283 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-sczmb" event={"ID":"365e662d-e16c-4f51-af06-055f14000dc6","Type":"ContainerDied","Data":"6954754df6166fc84af6bae312741ca24fc9f16d31798cee2e7ba046d7a5c7dc"} Jan 05 23:30:48 crc kubenswrapper[5034]: I0105 23:30:48.867324 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-sczmb" event={"ID":"365e662d-e16c-4f51-af06-055f14000dc6","Type":"ContainerStarted","Data":"4a7b015bda4db3cf853fa023dd06b94d9060a0af559a5329d23e31a8ab03f8d7"} Jan 05 23:30:49 crc kubenswrapper[5034]: I0105 23:30:49.356066 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ab47-account-create-update-mcb49"] Jan 05 23:30:49 crc kubenswrapper[5034]: W0105 23:30:49.356941 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod383eea24_e3ab_47ab_9a30_196673dd0ccd.slice/crio-46861563a07ae1c785224090e64f268fda12fbf4d4497776d11e6011b598342e WatchSource:0}: Error finding container 46861563a07ae1c785224090e64f268fda12fbf4d4497776d11e6011b598342e: Status 404 returned error can't find the container with id 46861563a07ae1c785224090e64f268fda12fbf4d4497776d11e6011b598342e Jan 05 23:30:49 crc kubenswrapper[5034]: I0105 23:30:49.838536 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:30:49 crc kubenswrapper[5034]: E0105 23:30:49.839116 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:30:49 crc kubenswrapper[5034]: I0105 23:30:49.877893 5034 generic.go:334] "Generic 
(PLEG): container finished" podID="383eea24-e3ab-47ab-9a30-196673dd0ccd" containerID="ce0fb3d9058817fa6f0b14f17fa480241bb4fbf31c1ecef233a010ee2feb2392" exitCode=0 Jan 05 23:30:49 crc kubenswrapper[5034]: I0105 23:30:49.877982 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ab47-account-create-update-mcb49" event={"ID":"383eea24-e3ab-47ab-9a30-196673dd0ccd","Type":"ContainerDied","Data":"ce0fb3d9058817fa6f0b14f17fa480241bb4fbf31c1ecef233a010ee2feb2392"} Jan 05 23:30:49 crc kubenswrapper[5034]: I0105 23:30:49.878098 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ab47-account-create-update-mcb49" event={"ID":"383eea24-e3ab-47ab-9a30-196673dd0ccd","Type":"ContainerStarted","Data":"46861563a07ae1c785224090e64f268fda12fbf4d4497776d11e6011b598342e"} Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.233577 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.377230 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld7sh\" (UniqueName: \"kubernetes.io/projected/365e662d-e16c-4f51-af06-055f14000dc6-kube-api-access-ld7sh\") pod \"365e662d-e16c-4f51-af06-055f14000dc6\" (UID: \"365e662d-e16c-4f51-af06-055f14000dc6\") " Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.377448 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/365e662d-e16c-4f51-af06-055f14000dc6-operator-scripts\") pod \"365e662d-e16c-4f51-af06-055f14000dc6\" (UID: \"365e662d-e16c-4f51-af06-055f14000dc6\") " Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.378026 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365e662d-e16c-4f51-af06-055f14000dc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "365e662d-e16c-4f51-af06-055f14000dc6" (UID: "365e662d-e16c-4f51-af06-055f14000dc6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.378373 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/365e662d-e16c-4f51-af06-055f14000dc6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.382635 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365e662d-e16c-4f51-af06-055f14000dc6-kube-api-access-ld7sh" (OuterVolumeSpecName: "kube-api-access-ld7sh") pod "365e662d-e16c-4f51-af06-055f14000dc6" (UID: "365e662d-e16c-4f51-af06-055f14000dc6"). InnerVolumeSpecName "kube-api-access-ld7sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.479952 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld7sh\" (UniqueName: \"kubernetes.io/projected/365e662d-e16c-4f51-af06-055f14000dc6-kube-api-access-ld7sh\") on node \"crc\" DevicePath \"\"" Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.889131 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-sczmb" event={"ID":"365e662d-e16c-4f51-af06-055f14000dc6","Type":"ContainerDied","Data":"4a7b015bda4db3cf853fa023dd06b94d9060a0af559a5329d23e31a8ab03f8d7"} Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.889167 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-sczmb" Jan 05 23:30:50 crc kubenswrapper[5034]: I0105 23:30:50.889187 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7b015bda4db3cf853fa023dd06b94d9060a0af559a5329d23e31a8ab03f8d7" Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.235995 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.297279 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383eea24-e3ab-47ab-9a30-196673dd0ccd-operator-scripts\") pod \"383eea24-e3ab-47ab-9a30-196673dd0ccd\" (UID: \"383eea24-e3ab-47ab-9a30-196673dd0ccd\") " Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.297552 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxlb9\" (UniqueName: \"kubernetes.io/projected/383eea24-e3ab-47ab-9a30-196673dd0ccd-kube-api-access-bxlb9\") pod \"383eea24-e3ab-47ab-9a30-196673dd0ccd\" (UID: \"383eea24-e3ab-47ab-9a30-196673dd0ccd\") " Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.298060 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383eea24-e3ab-47ab-9a30-196673dd0ccd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "383eea24-e3ab-47ab-9a30-196673dd0ccd" (UID: "383eea24-e3ab-47ab-9a30-196673dd0ccd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.302951 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383eea24-e3ab-47ab-9a30-196673dd0ccd-kube-api-access-bxlb9" (OuterVolumeSpecName: "kube-api-access-bxlb9") pod "383eea24-e3ab-47ab-9a30-196673dd0ccd" (UID: "383eea24-e3ab-47ab-9a30-196673dd0ccd"). InnerVolumeSpecName "kube-api-access-bxlb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.400721 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxlb9\" (UniqueName: \"kubernetes.io/projected/383eea24-e3ab-47ab-9a30-196673dd0ccd-kube-api-access-bxlb9\") on node \"crc\" DevicePath \"\"" Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.400777 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383eea24-e3ab-47ab-9a30-196673dd0ccd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.903174 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ab47-account-create-update-mcb49" event={"ID":"383eea24-e3ab-47ab-9a30-196673dd0ccd","Type":"ContainerDied","Data":"46861563a07ae1c785224090e64f268fda12fbf4d4497776d11e6011b598342e"} Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.903219 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ab47-account-create-update-mcb49" Jan 05 23:30:51 crc kubenswrapper[5034]: I0105 23:30:51.903227 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46861563a07ae1c785224090e64f268fda12fbf4d4497776d11e6011b598342e" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.049551 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hlvmj"] Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.062484 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hlvmj"] Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.630453 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-58ccd8cfb7-2fmv2"] Jan 05 23:30:54 crc kubenswrapper[5034]: E0105 23:30:54.631531 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383eea24-e3ab-47ab-9a30-196673dd0ccd" containerName="mariadb-account-create-update" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.636813 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="383eea24-e3ab-47ab-9a30-196673dd0ccd" containerName="mariadb-account-create-update" Jan 05 23:30:54 crc kubenswrapper[5034]: E0105 23:30:54.636952 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365e662d-e16c-4f51-af06-055f14000dc6" containerName="mariadb-database-create" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.637009 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="365e662d-e16c-4f51-af06-055f14000dc6" containerName="mariadb-database-create" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.637483 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="365e662d-e16c-4f51-af06-055f14000dc6" containerName="mariadb-database-create" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.637554 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="383eea24-e3ab-47ab-9a30-196673dd0ccd" containerName="mariadb-account-create-update" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.645136 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.650606 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.650832 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.651019 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.651427 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-vjb8d" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.652667 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-58ccd8cfb7-2fmv2"] Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.772186 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-combined-ca-bundle\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.772272 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-ovndb-tls-certs\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.772315 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-scripts\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.772342 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data-merged\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.772370 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-octavia-run\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.772435 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.874651 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.875189 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-combined-ca-bundle\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.875375 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-ovndb-tls-certs\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.875482 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-scripts\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.875569 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data-merged\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.875673 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-octavia-run\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.876480 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-octavia-run\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.876926 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data-merged\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.882173 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-ovndb-tls-certs\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.882491 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-scripts\") pod \"octavia-api-58ccd8cfb7-2fmv2\" 
(UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.897031 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.899428 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-combined-ca-bundle\") pod \"octavia-api-58ccd8cfb7-2fmv2\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:54 crc kubenswrapper[5034]: I0105 23:30:54.969960 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:30:55 crc kubenswrapper[5034]: I0105 23:30:55.497017 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-58ccd8cfb7-2fmv2"] Jan 05 23:30:55 crc kubenswrapper[5034]: W0105 23:30:55.507832 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb94be02_f0d1_40be_b18d_2a5fa82f7463.slice/crio-1a2bb1c2a8f84bb28a7e42a3eab9511beaab3c11760d337aa962ab042eaf849e WatchSource:0}: Error finding container 1a2bb1c2a8f84bb28a7e42a3eab9511beaab3c11760d337aa962ab042eaf849e: Status 404 returned error can't find the container with id 1a2bb1c2a8f84bb28a7e42a3eab9511beaab3c11760d337aa962ab042eaf849e Jan 05 23:30:55 crc kubenswrapper[5034]: I0105 23:30:55.856051 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91a628a-9407-496c-96fa-25985569f851" path="/var/lib/kubelet/pods/e91a628a-9407-496c-96fa-25985569f851/volumes" Jan 05 23:30:55 crc kubenswrapper[5034]: I0105 23:30:55.943607 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" event={"ID":"bb94be02-f0d1-40be-b18d-2a5fa82f7463","Type":"ContainerStarted","Data":"1a2bb1c2a8f84bb28a7e42a3eab9511beaab3c11760d337aa962ab042eaf849e"} Jan 05 23:31:00 crc kubenswrapper[5034]: I0105 23:31:00.198872 5034 scope.go:117] "RemoveContainer" containerID="50f1fcc01b16dbf9bbff1097ab95070080557e928844c4d854cedb2e00b93fc6" Jan 05 23:31:02 crc kubenswrapper[5034]: I0105 23:31:02.840201 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:31:05 crc kubenswrapper[5034]: I0105 23:31:05.772902 5034 scope.go:117] "RemoveContainer" containerID="8a028b63bd5bc008054daa4235142157039813163f46f1c5e5f864c247a810f5" Jan 05 23:31:05 crc kubenswrapper[5034]: I0105 23:31:05.819136 5034 scope.go:117] "RemoveContainer" containerID="43bda9d75dbee027a1cee3f2555f210514c0173269b3ae446477919b0aecae48" Jan 05 23:31:05 crc kubenswrapper[5034]: I0105 23:31:05.947462 5034 scope.go:117] "RemoveContainer" containerID="8fe8f2d0718d3a1afe9ea0b3d87048d7293976d30bcc51f1617d98261df539cb" Jan 05 23:31:06 crc kubenswrapper[5034]: I0105 23:31:06.014330 5034 scope.go:117] "RemoveContainer" containerID="a017c718f0983b159a48d8593cf86e5d7ecdda530f84f9fe2ecab609b4641785" Jan 05 23:31:07 crc kubenswrapper[5034]: I0105 23:31:07.068596 5034 generic.go:334] "Generic (PLEG): container finished" podID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" 
containerID="96ff6fa95a43cdcd063ae7e034591b8b24948b77edbdb43cb370392019008c3e" exitCode=0 Jan 05 23:31:07 crc kubenswrapper[5034]: I0105 23:31:07.068990 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" event={"ID":"bb94be02-f0d1-40be-b18d-2a5fa82f7463","Type":"ContainerDied","Data":"96ff6fa95a43cdcd063ae7e034591b8b24948b77edbdb43cb370392019008c3e"} Jan 05 23:31:07 crc kubenswrapper[5034]: I0105 23:31:07.074700 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"a88a5134ff25bff3380251394560c9cbca0838a1161bcde80ce38bf8d4b764a1"} Jan 05 23:31:08 crc kubenswrapper[5034]: I0105 23:31:08.088440 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" event={"ID":"bb94be02-f0d1-40be-b18d-2a5fa82f7463","Type":"ContainerStarted","Data":"6dd2d1a6497516441d9bc7b0b6588aa32e421a4ca78dac3701bc4921762c0ddf"} Jan 05 23:31:08 crc kubenswrapper[5034]: I0105 23:31:08.089364 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" event={"ID":"bb94be02-f0d1-40be-b18d-2a5fa82f7463","Type":"ContainerStarted","Data":"2b1ceb58125c1a4573a4fc42ee84735e7b74707f82b1365e87a8c301ee510f7e"} Jan 05 23:31:08 crc kubenswrapper[5034]: I0105 23:31:08.089389 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:31:08 crc kubenswrapper[5034]: I0105 23:31:08.089406 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:31:08 crc kubenswrapper[5034]: I0105 23:31:08.127208 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" podStartSLOduration=3.81709137 podStartE2EDuration="14.127171494s" podCreationTimestamp="2026-01-05 23:30:54 +0000 UTC" firstStartedPulling="2026-01-05 23:30:55.510932802 +0000 UTC m=+5947.882932241" lastFinishedPulling="2026-01-05 23:31:05.821012926 +0000 UTC m=+5958.193012365" observedRunningTime="2026-01-05 23:31:08.109645176 +0000 UTC m=+5960.481644615" watchObservedRunningTime="2026-01-05 23:31:08.127171494 +0000 UTC m=+5960.499170943" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.499488 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7kpp9" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.504138 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-w6vqp" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.516980 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7kpp9" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.710810 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-w6vqp-config-7hhvv"] Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.712163 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.720238 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.735575 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w6vqp-config-7hhvv"] Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.846055 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-additional-scripts\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.846442 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtsf\" (UniqueName: \"kubernetes.io/projected/ea40834d-1fea-4193-81d5-7844d87e07b2-kube-api-access-5vtsf\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.846580 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.846610 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-scripts\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.846644 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-log-ovn\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.846687 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run-ovn\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.949202 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vtsf\" (UniqueName: \"kubernetes.io/projected/ea40834d-1fea-4193-81d5-7844d87e07b2-kube-api-access-5vtsf\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.949379 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.949414 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-scripts\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.949436 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-log-ovn\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.949462 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run-ovn\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.949656 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-additional-scripts\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.950046 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-log-ovn\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.950161 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run-ovn\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.950514 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.951578 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-additional-scripts\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.952169 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-scripts\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:14 crc kubenswrapper[5034]: I0105 23:31:14.980546 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vtsf\" (UniqueName: \"kubernetes.io/projected/ea40834d-1fea-4193-81d5-7844d87e07b2-kube-api-access-5vtsf\") pod \"ovn-controller-w6vqp-config-7hhvv\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:15 crc kubenswrapper[5034]: I0105 23:31:15.054236 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:15 crc kubenswrapper[5034]: I0105 23:31:15.544884 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w6vqp-config-7hhvv"] Jan 05 23:31:16 crc kubenswrapper[5034]: I0105 23:31:16.167679 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp-config-7hhvv" event={"ID":"ea40834d-1fea-4193-81d5-7844d87e07b2","Type":"ContainerStarted","Data":"383ced50a2766e5ffb170c33bc8c1079e41f66926989292eb0ae7452fcbdf955"} Jan 05 23:31:16 crc kubenswrapper[5034]: I0105 23:31:16.168333 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp-config-7hhvv" event={"ID":"ea40834d-1fea-4193-81d5-7844d87e07b2","Type":"ContainerStarted","Data":"cf27684ba9ef9f5e6aa560d033e60131a7f0adb284f8fd0f72cc32577ecc6b87"} Jan 05 23:31:16 crc kubenswrapper[5034]: I0105 23:31:16.198692 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-w6vqp-config-7hhvv" podStartSLOduration=2.198664848 podStartE2EDuration="2.198664848s" podCreationTimestamp="2026-01-05 23:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:31:16.1867568 +0000 UTC m=+5968.558756249" watchObservedRunningTime="2026-01-05 23:31:16.198664848 +0000 UTC m=+5968.570664287" Jan 05 23:31:17 crc kubenswrapper[5034]: I0105 23:31:17.181089 5034 generic.go:334] "Generic (PLEG): container finished" podID="ea40834d-1fea-4193-81d5-7844d87e07b2" containerID="383ced50a2766e5ffb170c33bc8c1079e41f66926989292eb0ae7452fcbdf955" exitCode=0 Jan 05 23:31:17 crc kubenswrapper[5034]: I0105 23:31:17.181378 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp-config-7hhvv" event={"ID":"ea40834d-1fea-4193-81d5-7844d87e07b2","Type":"ContainerDied","Data":"383ced50a2766e5ffb170c33bc8c1079e41f66926989292eb0ae7452fcbdf955"} Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.721439 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.844036 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run\") pod \"ea40834d-1fea-4193-81d5-7844d87e07b2\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.848170 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run" (OuterVolumeSpecName: "var-run") pod "ea40834d-1fea-4193-81d5-7844d87e07b2" (UID: "ea40834d-1fea-4193-81d5-7844d87e07b2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.848190 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-log-ovn\") pod \"ea40834d-1fea-4193-81d5-7844d87e07b2\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.848270 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run-ovn\") pod \"ea40834d-1fea-4193-81d5-7844d87e07b2\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.848299 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ea40834d-1fea-4193-81d5-7844d87e07b2" (UID: "ea40834d-1fea-4193-81d5-7844d87e07b2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.848414 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ea40834d-1fea-4193-81d5-7844d87e07b2" (UID: "ea40834d-1fea-4193-81d5-7844d87e07b2"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.848442 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vtsf\" (UniqueName: \"kubernetes.io/projected/ea40834d-1fea-4193-81d5-7844d87e07b2-kube-api-access-5vtsf\") pod \"ea40834d-1fea-4193-81d5-7844d87e07b2\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.848510 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-additional-scripts\") pod \"ea40834d-1fea-4193-81d5-7844d87e07b2\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.848622 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-scripts\") pod \"ea40834d-1fea-4193-81d5-7844d87e07b2\" (UID: \"ea40834d-1fea-4193-81d5-7844d87e07b2\") " Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.849840 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ea40834d-1fea-4193-81d5-7844d87e07b2" (UID: "ea40834d-1fea-4193-81d5-7844d87e07b2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.850035 5034 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.850057 5034 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.850066 5034 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea40834d-1fea-4193-81d5-7844d87e07b2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.851203 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-scripts" (OuterVolumeSpecName: "scripts") pod "ea40834d-1fea-4193-81d5-7844d87e07b2" (UID: "ea40834d-1fea-4193-81d5-7844d87e07b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.932478 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea40834d-1fea-4193-81d5-7844d87e07b2-kube-api-access-5vtsf" (OuterVolumeSpecName: "kube-api-access-5vtsf") pod "ea40834d-1fea-4193-81d5-7844d87e07b2" (UID: "ea40834d-1fea-4193-81d5-7844d87e07b2"). InnerVolumeSpecName "kube-api-access-5vtsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.954682 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vtsf\" (UniqueName: \"kubernetes.io/projected/ea40834d-1fea-4193-81d5-7844d87e07b2-kube-api-access-5vtsf\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.954732 5034 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:18 crc kubenswrapper[5034]: I0105 23:31:18.954747 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea40834d-1fea-4193-81d5-7844d87e07b2-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.204324 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp-config-7hhvv" event={"ID":"ea40834d-1fea-4193-81d5-7844d87e07b2","Type":"ContainerDied","Data":"cf27684ba9ef9f5e6aa560d033e60131a7f0adb284f8fd0f72cc32577ecc6b87"} Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.204382 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf27684ba9ef9f5e6aa560d033e60131a7f0adb284f8fd0f72cc32577ecc6b87" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.204493 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w6vqp-config-7hhvv" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.313201 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-w6vqp-config-7hhvv"] Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.326072 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-w6vqp-config-7hhvv"] Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.403655 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-w6vqp-config-4twnd"] Jan 05 23:31:19 crc kubenswrapper[5034]: E0105 23:31:19.404497 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea40834d-1fea-4193-81d5-7844d87e07b2" containerName="ovn-config" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.404516 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea40834d-1fea-4193-81d5-7844d87e07b2" containerName="ovn-config" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.404739 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea40834d-1fea-4193-81d5-7844d87e07b2" containerName="ovn-config" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.407458 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.415959 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.422147 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w6vqp-config-4twnd"] Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.465103 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-scripts\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.465191 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run-ovn\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.465224 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.465260 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-log-ovn\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.465524 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94px\" (UniqueName: \"kubernetes.io/projected/ffb62b3f-563a-421d-a020-9a5672058048-kube-api-access-q94px\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.465605 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-additional-scripts\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568010 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q94px\" (UniqueName: \"kubernetes.io/projected/ffb62b3f-563a-421d-a020-9a5672058048-kube-api-access-q94px\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568063 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-additional-scripts\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568188 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-scripts\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568257 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run-ovn\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568280 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568318 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-log-ovn\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568657 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-log-ovn\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568695 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run-ovn\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.568664 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.569008 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-additional-scripts\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.570841 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-scripts\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.587985 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94px\" (UniqueName: \"kubernetes.io/projected/ffb62b3f-563a-421d-a020-9a5672058048-kube-api-access-q94px\") pod \"ovn-controller-w6vqp-config-4twnd\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.734013 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:19 crc kubenswrapper[5034]: I0105 23:31:19.850998 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea40834d-1fea-4193-81d5-7844d87e07b2" path="/var/lib/kubelet/pods/ea40834d-1fea-4193-81d5-7844d87e07b2/volumes" Jan 05 23:31:20 crc kubenswrapper[5034]: I0105 23:31:20.250648 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w6vqp-config-4twnd"] Jan 05 23:31:21 crc kubenswrapper[5034]: I0105 23:31:21.224663 5034 generic.go:334] "Generic (PLEG): container finished" podID="ffb62b3f-563a-421d-a020-9a5672058048" containerID="4a40f2be015b8dc9807fc47a4472d1954ea222c1ed9f3faa42158be4b4730895" exitCode=0 Jan 05 23:31:21 crc kubenswrapper[5034]: I0105 23:31:21.224790 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp-config-4twnd" event={"ID":"ffb62b3f-563a-421d-a020-9a5672058048","Type":"ContainerDied","Data":"4a40f2be015b8dc9807fc47a4472d1954ea222c1ed9f3faa42158be4b4730895"} Jan 05 23:31:21 crc kubenswrapper[5034]: I0105 23:31:21.225207 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp-config-4twnd" event={"ID":"ffb62b3f-563a-421d-a020-9a5672058048","Type":"ContainerStarted","Data":"a2a7d792dbfb81957bb0346e4795c21dcf93a318d9a8c597031a7f3192318b8e"} Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.133481 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-pbtqd"] Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.135867 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.140044 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.140309 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.140475 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.153474 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-pbtqd"] Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.239947 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040f9f95-2d60-448e-b698-041cdd081ec2-config-data\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.240370 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/040f9f95-2d60-448e-b698-041cdd081ec2-hm-ports\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.240402 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040f9f95-2d60-448e-b698-041cdd081ec2-scripts\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.240465 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/040f9f95-2d60-448e-b698-041cdd081ec2-config-data-merged\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.343325 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040f9f95-2d60-448e-b698-041cdd081ec2-config-data\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.343400 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/040f9f95-2d60-448e-b698-041cdd081ec2-hm-ports\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.343424 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040f9f95-2d60-448e-b698-041cdd081ec2-scripts\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.343467 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/040f9f95-2d60-448e-b698-041cdd081ec2-config-data-merged\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.344057 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/040f9f95-2d60-448e-b698-041cdd081ec2-config-data-merged\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.345679 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/040f9f95-2d60-448e-b698-041cdd081ec2-hm-ports\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.367835 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040f9f95-2d60-448e-b698-041cdd081ec2-config-data\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.368101 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040f9f95-2d60-448e-b698-041cdd081ec2-scripts\") pod \"octavia-rsyslog-pbtqd\" (UID: \"040f9f95-2d60-448e-b698-041cdd081ec2\") " pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.457232 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.588201 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.655092 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q94px\" (UniqueName: \"kubernetes.io/projected/ffb62b3f-563a-421d-a020-9a5672058048-kube-api-access-q94px\") pod \"ffb62b3f-563a-421d-a020-9a5672058048\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.655162 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run-ovn\") pod \"ffb62b3f-563a-421d-a020-9a5672058048\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.655225 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-log-ovn\") pod \"ffb62b3f-563a-421d-a020-9a5672058048\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.655290 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-additional-scripts\") pod \"ffb62b3f-563a-421d-a020-9a5672058048\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.655517 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run\") pod \"ffb62b3f-563a-421d-a020-9a5672058048\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.655581 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ffb62b3f-563a-421d-a020-9a5672058048" (UID: "ffb62b3f-563a-421d-a020-9a5672058048"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.655637 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ffb62b3f-563a-421d-a020-9a5672058048" (UID: "ffb62b3f-563a-421d-a020-9a5672058048"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.656386 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-scripts\") pod \"ffb62b3f-563a-421d-a020-9a5672058048\" (UID: \"ffb62b3f-563a-421d-a020-9a5672058048\") " Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.658231 5034 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.658260 5034 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.658204 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run" (OuterVolumeSpecName: "var-run") pod "ffb62b3f-563a-421d-a020-9a5672058048" (UID: "ffb62b3f-563a-421d-a020-9a5672058048"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.659301 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-scripts" (OuterVolumeSpecName: "scripts") pod "ffb62b3f-563a-421d-a020-9a5672058048" (UID: "ffb62b3f-563a-421d-a020-9a5672058048"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.661796 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ffb62b3f-563a-421d-a020-9a5672058048" (UID: "ffb62b3f-563a-421d-a020-9a5672058048"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.673671 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb62b3f-563a-421d-a020-9a5672058048-kube-api-access-q94px" (OuterVolumeSpecName: "kube-api-access-q94px") pod "ffb62b3f-563a-421d-a020-9a5672058048" (UID: "ffb62b3f-563a-421d-a020-9a5672058048"). InnerVolumeSpecName "kube-api-access-q94px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.760362 5034 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb62b3f-563a-421d-a020-9a5672058048-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.760400 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.760413 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q94px\" (UniqueName: \"kubernetes.io/projected/ffb62b3f-563a-421d-a020-9a5672058048-kube-api-access-q94px\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:22 crc kubenswrapper[5034]: I0105 23:31:22.760558 5034 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb62b3f-563a-421d-a020-9a5672058048-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.119682 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-pbtqd"] Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.271948 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w6vqp-config-4twnd" event={"ID":"ffb62b3f-563a-421d-a020-9a5672058048","Type":"ContainerDied","Data":"a2a7d792dbfb81957bb0346e4795c21dcf93a318d9a8c597031a7f3192318b8e"} Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.272012 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a7d792dbfb81957bb0346e4795c21dcf93a318d9a8c597031a7f3192318b8e" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.272132 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w6vqp-config-4twnd" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.283400 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-pbtqd" event={"ID":"040f9f95-2d60-448e-b698-041cdd081ec2","Type":"ContainerStarted","Data":"0ff604fc08cf5f07b8d7b3c684b5994983613d2ea0a5524a97dc2df1e1c10102"} Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.311013 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-597bd57878-bhq6n"] Jan 05 23:31:23 crc kubenswrapper[5034]: E0105 23:31:23.311566 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb62b3f-563a-421d-a020-9a5672058048" containerName="ovn-config" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.311587 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb62b3f-563a-421d-a020-9a5672058048" containerName="ovn-config" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.311783 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb62b3f-563a-421d-a020-9a5672058048" containerName="ovn-config" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.312888 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.320325 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.328365 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-597bd57878-bhq6n"] Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.372972 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eed323dd-5c26-49d2-8108-7dddd6fbb11f-httpd-config\") pod \"octavia-image-upload-597bd57878-bhq6n\" (UID: \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\") " pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.373188 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/eed323dd-5c26-49d2-8108-7dddd6fbb11f-amphora-image\") pod \"octavia-image-upload-597bd57878-bhq6n\" (UID: \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\") " pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.474543 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/eed323dd-5c26-49d2-8108-7dddd6fbb11f-amphora-image\") pod \"octavia-image-upload-597bd57878-bhq6n\" (UID: \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\") " pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.474953 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eed323dd-5c26-49d2-8108-7dddd6fbb11f-httpd-config\") pod \"octavia-image-upload-597bd57878-bhq6n\" (UID: \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\") " pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.475092 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/eed323dd-5c26-49d2-8108-7dddd6fbb11f-amphora-image\") pod \"octavia-image-upload-597bd57878-bhq6n\" (UID: \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\") " pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.479767 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eed323dd-5c26-49d2-8108-7dddd6fbb11f-httpd-config\") pod \"octavia-image-upload-597bd57878-bhq6n\" (UID: \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\") " pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.639508 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.709480 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-w6vqp-config-4twnd"] Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.724681 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-w6vqp-config-4twnd"] Jan 05 23:31:23 crc kubenswrapper[5034]: I0105 23:31:23.859009 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb62b3f-563a-421d-a020-9a5672058048" path="/var/lib/kubelet/pods/ffb62b3f-563a-421d-a020-9a5672058048/volumes" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.165544 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-597bd57878-bhq6n"] Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.315997 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-bhq6n" event={"ID":"eed323dd-5c26-49d2-8108-7dddd6fbb11f","Type":"ContainerStarted","Data":"6a90f15348b97976232f9e949c815443f047afbfb305fb190c017131b7212c76"} Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.539384 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6d47d78db7-prdg8"] Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.541901 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.544419 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.544675 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.560152 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6d47d78db7-prdg8"] Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.627536 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-combined-ca-bundle\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.627597 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-internal-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.627623 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-config-data\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.627653 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-ovndb-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: 
\"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.627686 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/545f7399-0c28-4bd5-b8ee-dcfbe7511654-config-data-merged\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.627737 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-scripts\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.627764 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-public-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.627832 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/545f7399-0c28-4bd5-b8ee-dcfbe7511654-octavia-run\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.730656 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/545f7399-0c28-4bd5-b8ee-dcfbe7511654-octavia-run\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.730801 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-combined-ca-bundle\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.730830 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-internal-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.730848 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-config-data\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.730895 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-ovndb-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" 
(UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.730924 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/545f7399-0c28-4bd5-b8ee-dcfbe7511654-config-data-merged\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.731403 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-scripts\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.732422 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-public-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.735646 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/545f7399-0c28-4bd5-b8ee-dcfbe7511654-config-data-merged\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.736560 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/545f7399-0c28-4bd5-b8ee-dcfbe7511654-octavia-run\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.740889 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-config-data\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.741173 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-combined-ca-bundle\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.742166 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-internal-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.742715 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-public-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 
23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.746766 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-ovndb-tls-certs\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.748250 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/545f7399-0c28-4bd5-b8ee-dcfbe7511654-scripts\") pod \"octavia-api-6d47d78db7-prdg8\" (UID: \"545f7399-0c28-4bd5-b8ee-dcfbe7511654\") " pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:24 crc kubenswrapper[5034]: I0105 23:31:24.885970 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:25 crc kubenswrapper[5034]: I0105 23:31:25.631028 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6d47d78db7-prdg8"] Jan 05 23:31:26 crc kubenswrapper[5034]: W0105 23:31:26.120731 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod545f7399_0c28_4bd5_b8ee_dcfbe7511654.slice/crio-883826595cf7f692083bf5d954cf536f774d1aeea4d4cfcfea77b221842ecf65 WatchSource:0}: Error finding container 883826595cf7f692083bf5d954cf536f774d1aeea4d4cfcfea77b221842ecf65: Status 404 returned error can't find the container with id 883826595cf7f692083bf5d954cf536f774d1aeea4d4cfcfea77b221842ecf65 Jan 05 23:31:26 crc kubenswrapper[5034]: I0105 23:31:26.374596 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-pbtqd" event={"ID":"040f9f95-2d60-448e-b698-041cdd081ec2","Type":"ContainerStarted","Data":"26f5082ea3786baf236b9877fd191059993506920949cfad08aeac29cf4207ee"} Jan 05 23:31:26 crc kubenswrapper[5034]: I0105 23:31:26.388326 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d47d78db7-prdg8" event={"ID":"545f7399-0c28-4bd5-b8ee-dcfbe7511654","Type":"ContainerStarted","Data":"883826595cf7f692083bf5d954cf536f774d1aeea4d4cfcfea77b221842ecf65"} Jan 05 23:31:27 crc kubenswrapper[5034]: I0105 23:31:27.402172 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d47d78db7-prdg8" event={"ID":"545f7399-0c28-4bd5-b8ee-dcfbe7511654","Type":"ContainerStarted","Data":"af107646f5c0c7fb8462db1ab5e307517326e211af98e22c4628a8ac692f5b87"} Jan 05 23:31:28 crc kubenswrapper[5034]: I0105 23:31:28.413583 5034 generic.go:334] "Generic (PLEG): container finished" podID="040f9f95-2d60-448e-b698-041cdd081ec2" containerID="26f5082ea3786baf236b9877fd191059993506920949cfad08aeac29cf4207ee" exitCode=0 Jan 05 23:31:28 crc kubenswrapper[5034]: I0105 23:31:28.413690 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-pbtqd" event={"ID":"040f9f95-2d60-448e-b698-041cdd081ec2","Type":"ContainerDied","Data":"26f5082ea3786baf236b9877fd191059993506920949cfad08aeac29cf4207ee"} Jan 05 23:31:28 crc kubenswrapper[5034]: I0105 23:31:28.426730 5034 generic.go:334] "Generic (PLEG): container finished" podID="545f7399-0c28-4bd5-b8ee-dcfbe7511654" containerID="af107646f5c0c7fb8462db1ab5e307517326e211af98e22c4628a8ac692f5b87" exitCode=0 Jan 05 23:31:28 crc kubenswrapper[5034]: I0105 23:31:28.426777 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d47d78db7-prdg8" 
event={"ID":"545f7399-0c28-4bd5-b8ee-dcfbe7511654","Type":"ContainerDied","Data":"af107646f5c0c7fb8462db1ab5e307517326e211af98e22c4628a8ac692f5b87"} Jan 05 23:31:30 crc kubenswrapper[5034]: I0105 23:31:30.222225 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:31:30 crc kubenswrapper[5034]: I0105 23:31:30.286053 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:31:30 crc kubenswrapper[5034]: I0105 23:31:30.463524 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d47d78db7-prdg8" event={"ID":"545f7399-0c28-4bd5-b8ee-dcfbe7511654","Type":"ContainerStarted","Data":"14c2d33738d327302e306f7302d76bae1563b87b1ebfaf0a91aaf91a492cffe4"} Jan 05 23:31:30 crc kubenswrapper[5034]: I0105 23:31:30.463578 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d47d78db7-prdg8" event={"ID":"545f7399-0c28-4bd5-b8ee-dcfbe7511654","Type":"ContainerStarted","Data":"581f5c5fbe99c9977fd36239907036592ff0218a6cf45abf66494c5328f32796"} Jan 05 23:31:30 crc kubenswrapper[5034]: I0105 23:31:30.463627 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:30 crc kubenswrapper[5034]: I0105 23:31:30.463662 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:30 crc kubenswrapper[5034]: I0105 23:31:30.496887 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6d47d78db7-prdg8" podStartSLOduration=6.496860532 podStartE2EDuration="6.496860532s" podCreationTimestamp="2026-01-05 23:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:31:30.486991792 +0000 UTC m=+5982.858991241" watchObservedRunningTime="2026-01-05 23:31:30.496860532 +0000 UTC m=+5982.868859971" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.443602 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-xpclr"] Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.446147 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.448655 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.453922 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-xpclr"] Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.591824 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-scripts\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.591888 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data-merged\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.592285 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.592627 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-combined-ca-bundle\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.695387 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.695501 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-combined-ca-bundle\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.695562 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-scripts\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.695597 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data-merged\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.696246 5034 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data-merged\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.702926 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.704859 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-scripts\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.705668 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-combined-ca-bundle\") pod \"octavia-db-sync-xpclr\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:33 crc kubenswrapper[5034]: I0105 23:31:33.766536 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:36 crc kubenswrapper[5034]: W0105 23:31:36.367547 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5df5858b_127b_4f28_8ce3_b0dc938ddaae.slice/crio-46e56919ec5b6dd73602d1aeaa47ba3996d154937a0d3c1d9765b3ac592ddec9 WatchSource:0}: Error finding container 46e56919ec5b6dd73602d1aeaa47ba3996d154937a0d3c1d9765b3ac592ddec9: Status 404 returned error can't find the container with id 46e56919ec5b6dd73602d1aeaa47ba3996d154937a0d3c1d9765b3ac592ddec9 Jan 05 23:31:36 crc kubenswrapper[5034]: I0105 23:31:36.369870 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-xpclr"] Jan 05 23:31:36 crc kubenswrapper[5034]: I0105 23:31:36.524103 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-bhq6n" event={"ID":"eed323dd-5c26-49d2-8108-7dddd6fbb11f","Type":"ContainerStarted","Data":"dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c"} Jan 05 23:31:36 crc kubenswrapper[5034]: I0105 23:31:36.527562 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-xpclr" event={"ID":"5df5858b-127b-4f28-8ce3-b0dc938ddaae","Type":"ContainerStarted","Data":"0db83b64e78f4d4954ba4f6bf548977c93dec6604e6f6187c80f7c4c4bf0061a"} Jan 05 23:31:36 crc kubenswrapper[5034]: I0105 23:31:36.527627 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-xpclr" event={"ID":"5df5858b-127b-4f28-8ce3-b0dc938ddaae","Type":"ContainerStarted","Data":"46e56919ec5b6dd73602d1aeaa47ba3996d154937a0d3c1d9765b3ac592ddec9"} Jan 05 23:31:36 crc kubenswrapper[5034]: I0105 23:31:36.531976 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-pbtqd" event={"ID":"040f9f95-2d60-448e-b698-041cdd081ec2","Type":"ContainerStarted","Data":"e61451971ebe838607b05764f20d66cae1a165c8c3c552c4ec5fff0a525b6dbf"} Jan 05 23:31:36 crc kubenswrapper[5034]: I0105 23:31:36.532853 5034 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:31:36 crc kubenswrapper[5034]: I0105 23:31:36.585276 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-pbtqd" podStartSLOduration=1.7968927909999999 podStartE2EDuration="14.585241398s" podCreationTimestamp="2026-01-05 23:31:22 +0000 UTC" firstStartedPulling="2026-01-05 23:31:23.115780471 +0000 UTC m=+5975.487779910" lastFinishedPulling="2026-01-05 23:31:35.904129078 +0000 UTC m=+5988.276128517" observedRunningTime="2026-01-05 23:31:36.56947866 +0000 UTC m=+5988.941478089" watchObservedRunningTime="2026-01-05 23:31:36.585241398 +0000 UTC m=+5988.957240837" Jan 05 23:31:37 crc kubenswrapper[5034]: I0105 23:31:37.550602 5034 generic.go:334] "Generic (PLEG): container finished" podID="5df5858b-127b-4f28-8ce3-b0dc938ddaae" containerID="0db83b64e78f4d4954ba4f6bf548977c93dec6604e6f6187c80f7c4c4bf0061a" exitCode=0 Jan 05 23:31:37 crc kubenswrapper[5034]: I0105 23:31:37.551033 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-xpclr" event={"ID":"5df5858b-127b-4f28-8ce3-b0dc938ddaae","Type":"ContainerDied","Data":"0db83b64e78f4d4954ba4f6bf548977c93dec6604e6f6187c80f7c4c4bf0061a"} Jan 05 23:31:37 crc kubenswrapper[5034]: I0105 23:31:37.553417 5034 generic.go:334] "Generic (PLEG): container finished" podID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" containerID="dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c" exitCode=0 Jan 05 23:31:37 crc kubenswrapper[5034]: I0105 23:31:37.553573 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-bhq6n" event={"ID":"eed323dd-5c26-49d2-8108-7dddd6fbb11f","Type":"ContainerDied","Data":"dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c"} Jan 05 23:31:38 crc kubenswrapper[5034]: I0105 23:31:38.580448 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-xpclr" event={"ID":"5df5858b-127b-4f28-8ce3-b0dc938ddaae","Type":"ContainerStarted","Data":"8232c0ad270594f826771803f4dcc4867043171c9d9bc5ede80c9b5c70174dce"} Jan 05 23:31:38 crc kubenswrapper[5034]: I0105 23:31:38.589159 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-bhq6n" event={"ID":"eed323dd-5c26-49d2-8108-7dddd6fbb11f","Type":"ContainerStarted","Data":"646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b"} Jan 05 23:31:38 crc kubenswrapper[5034]: I0105 23:31:38.605809 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-xpclr" podStartSLOduration=5.605785269 podStartE2EDuration="5.605785269s" podCreationTimestamp="2026-01-05 23:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:31:38.60086957 +0000 UTC m=+5990.972869009" watchObservedRunningTime="2026-01-05 23:31:38.605785269 +0000 UTC m=+5990.977784708" Jan 05 23:31:44 crc kubenswrapper[5034]: I0105 23:31:44.162278 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:44 crc kubenswrapper[5034]: I0105 23:31:44.178277 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6d47d78db7-prdg8" Jan 05 23:31:44 crc kubenswrapper[5034]: I0105 23:31:44.200177 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/octavia-image-upload-597bd57878-bhq6n" podStartSLOduration=9.32944363 podStartE2EDuration="21.200155904s" podCreationTimestamp="2026-01-05 23:31:23 +0000 UTC" firstStartedPulling="2026-01-05 23:31:24.172493121 +0000 UTC m=+5976.544492560" lastFinishedPulling="2026-01-05 23:31:36.043205395 +0000 UTC m=+5988.415204834" observedRunningTime="2026-01-05 23:31:38.630445869 +0000 UTC m=+5991.002445308" watchObservedRunningTime="2026-01-05 23:31:44.200155904 +0000 UTC m=+5996.572155343" Jan 05 23:31:44 crc kubenswrapper[5034]: I0105 23:31:44.286246 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-58ccd8cfb7-2fmv2"] Jan 05 23:31:44 crc kubenswrapper[5034]: I0105 23:31:44.286516 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="octavia-api" containerID="cri-o://2b1ceb58125c1a4573a4fc42ee84735e7b74707f82b1365e87a8c301ee510f7e" gracePeriod=30 Jan 05 23:31:44 crc kubenswrapper[5034]: I0105 23:31:44.286605 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="octavia-api-provider-agent" containerID="cri-o://6dd2d1a6497516441d9bc7b0b6588aa32e421a4ca78dac3701bc4921762c0ddf" gracePeriod=30 Jan 05 23:31:44 crc kubenswrapper[5034]: I0105 23:31:44.652953 5034 generic.go:334] "Generic (PLEG): container finished" podID="5df5858b-127b-4f28-8ce3-b0dc938ddaae" containerID="8232c0ad270594f826771803f4dcc4867043171c9d9bc5ede80c9b5c70174dce" exitCode=0 Jan 05 23:31:44 crc kubenswrapper[5034]: I0105 23:31:44.653189 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-xpclr" event={"ID":"5df5858b-127b-4f28-8ce3-b0dc938ddaae","Type":"ContainerDied","Data":"8232c0ad270594f826771803f4dcc4867043171c9d9bc5ede80c9b5c70174dce"} Jan 05 23:31:45 crc kubenswrapper[5034]: I0105 23:31:45.667438 5034 generic.go:334] "Generic (PLEG): container finished" podID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerID="6dd2d1a6497516441d9bc7b0b6588aa32e421a4ca78dac3701bc4921762c0ddf" exitCode=0 Jan 05 23:31:45 crc kubenswrapper[5034]: I0105 23:31:45.667498 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" event={"ID":"bb94be02-f0d1-40be-b18d-2a5fa82f7463","Type":"ContainerDied","Data":"6dd2d1a6497516441d9bc7b0b6588aa32e421a4ca78dac3701bc4921762c0ddf"} Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.118514 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.149390 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data\") pod \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.149470 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-scripts\") pod \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.149551 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data-merged\") pod \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.149638 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-combined-ca-bundle\") pod \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\" (UID: \"5df5858b-127b-4f28-8ce3-b0dc938ddaae\") " Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.162103 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-scripts" (OuterVolumeSpecName: "scripts") pod "5df5858b-127b-4f28-8ce3-b0dc938ddaae" (UID: "5df5858b-127b-4f28-8ce3-b0dc938ddaae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.174657 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data" (OuterVolumeSpecName: "config-data") pod "5df5858b-127b-4f28-8ce3-b0dc938ddaae" (UID: "5df5858b-127b-4f28-8ce3-b0dc938ddaae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.187537 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5df5858b-127b-4f28-8ce3-b0dc938ddaae" (UID: "5df5858b-127b-4f28-8ce3-b0dc938ddaae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.204694 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "5df5858b-127b-4f28-8ce3-b0dc938ddaae" (UID: "5df5858b-127b-4f28-8ce3-b0dc938ddaae"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.256765 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.256808 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.256819 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5df5858b-127b-4f28-8ce3-b0dc938ddaae-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.256831 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df5858b-127b-4f28-8ce3-b0dc938ddaae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.680556 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-xpclr" event={"ID":"5df5858b-127b-4f28-8ce3-b0dc938ddaae","Type":"ContainerDied","Data":"46e56919ec5b6dd73602d1aeaa47ba3996d154937a0d3c1d9765b3ac592ddec9"} Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.680607 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46e56919ec5b6dd73602d1aeaa47ba3996d154937a0d3c1d9765b3ac592ddec9" Jan 05 23:31:46 crc kubenswrapper[5034]: I0105 23:31:46.680781 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-xpclr" Jan 05 23:31:47 crc kubenswrapper[5034]: I0105 23:31:47.699413 5034 generic.go:334] "Generic (PLEG): container finished" podID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerID="2b1ceb58125c1a4573a4fc42ee84735e7b74707f82b1365e87a8c301ee510f7e" exitCode=0 Jan 05 23:31:47 crc kubenswrapper[5034]: I0105 23:31:47.699467 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" event={"ID":"bb94be02-f0d1-40be-b18d-2a5fa82f7463","Type":"ContainerDied","Data":"2b1ceb58125c1a4573a4fc42ee84735e7b74707f82b1365e87a8c301ee510f7e"} Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.070573 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.101291 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data\") pod \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.101418 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-combined-ca-bundle\") pod \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.101546 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data-merged\") pod \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.101591 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-scripts\") pod \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.101634 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-octavia-run\") pod \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.101763 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-ovndb-tls-certs\") pod \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\" (UID: \"bb94be02-f0d1-40be-b18d-2a5fa82f7463\") " Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.101997 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "bb94be02-f0d1-40be-b18d-2a5fa82f7463" (UID: "bb94be02-f0d1-40be-b18d-2a5fa82f7463"). InnerVolumeSpecName "octavia-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.102242 5034 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-octavia-run\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.123272 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-scripts" (OuterVolumeSpecName: "scripts") pod "bb94be02-f0d1-40be-b18d-2a5fa82f7463" (UID: "bb94be02-f0d1-40be-b18d-2a5fa82f7463"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.132254 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data" (OuterVolumeSpecName: "config-data") pod "bb94be02-f0d1-40be-b18d-2a5fa82f7463" (UID: "bb94be02-f0d1-40be-b18d-2a5fa82f7463"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.207121 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.207154 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.240727 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "bb94be02-f0d1-40be-b18d-2a5fa82f7463" (UID: "bb94be02-f0d1-40be-b18d-2a5fa82f7463"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.240883 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb94be02-f0d1-40be-b18d-2a5fa82f7463" (UID: "bb94be02-f0d1-40be-b18d-2a5fa82f7463"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.309475 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.309534 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bb94be02-f0d1-40be-b18d-2a5fa82f7463-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.320109 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bb94be02-f0d1-40be-b18d-2a5fa82f7463" (UID: "bb94be02-f0d1-40be-b18d-2a5fa82f7463"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.411716 5034 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94be02-f0d1-40be-b18d-2a5fa82f7463-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.720550 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" event={"ID":"bb94be02-f0d1-40be-b18d-2a5fa82f7463","Type":"ContainerDied","Data":"1a2bb1c2a8f84bb28a7e42a3eab9511beaab3c11760d337aa962ab042eaf849e"} Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.721005 5034 scope.go:117] "RemoveContainer" containerID="6dd2d1a6497516441d9bc7b0b6588aa32e421a4ca78dac3701bc4921762c0ddf" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.720616 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-58ccd8cfb7-2fmv2" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.765451 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-58ccd8cfb7-2fmv2"] Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.770138 5034 scope.go:117] "RemoveContainer" containerID="2b1ceb58125c1a4573a4fc42ee84735e7b74707f82b1365e87a8c301ee510f7e" Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.775623 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-58ccd8cfb7-2fmv2"] Jan 05 23:31:48 crc kubenswrapper[5034]: I0105 23:31:48.792692 5034 scope.go:117] "RemoveContainer" containerID="96ff6fa95a43cdcd063ae7e034591b8b24948b77edbdb43cb370392019008c3e" Jan 05 23:31:49 crc kubenswrapper[5034]: I0105 23:31:49.854155 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" path="/var/lib/kubelet/pods/bb94be02-f0d1-40be-b18d-2a5fa82f7463/volumes" Jan 05 23:31:52 crc kubenswrapper[5034]: I0105 23:31:52.488855 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-pbtqd" Jan 05 23:32:17 crc kubenswrapper[5034]: I0105 23:32:17.993414 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-597bd57878-bhq6n"] Jan 05 23:32:17 crc kubenswrapper[5034]: I0105 23:32:17.995166 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-597bd57878-bhq6n" podUID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" containerName="octavia-amphora-httpd" containerID="cri-o://646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b" gracePeriod=30 Jan 05 23:32:18 crc kubenswrapper[5034]: I0105 23:32:18.633613 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:32:18 crc kubenswrapper[5034]: I0105 23:32:18.769835 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eed323dd-5c26-49d2-8108-7dddd6fbb11f-httpd-config\") pod \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\" (UID: \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\") " Jan 05 23:32:18 crc kubenswrapper[5034]: I0105 23:32:18.770047 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/eed323dd-5c26-49d2-8108-7dddd6fbb11f-amphora-image\") pod \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\" (UID: \"eed323dd-5c26-49d2-8108-7dddd6fbb11f\") " Jan 05 23:32:18 crc kubenswrapper[5034]: I0105 23:32:18.798637 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed323dd-5c26-49d2-8108-7dddd6fbb11f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "eed323dd-5c26-49d2-8108-7dddd6fbb11f" (UID: "eed323dd-5c26-49d2-8108-7dddd6fbb11f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:32:18 crc kubenswrapper[5034]: I0105 23:32:18.841734 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eed323dd-5c26-49d2-8108-7dddd6fbb11f-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "eed323dd-5c26-49d2-8108-7dddd6fbb11f" (UID: "eed323dd-5c26-49d2-8108-7dddd6fbb11f"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:32:18 crc kubenswrapper[5034]: I0105 23:32:18.873030 5034 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/eed323dd-5c26-49d2-8108-7dddd6fbb11f-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 05 23:32:18 crc kubenswrapper[5034]: I0105 23:32:18.873069 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eed323dd-5c26-49d2-8108-7dddd6fbb11f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.005621 5034 generic.go:334] "Generic (PLEG): container finished" podID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" containerID="646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b" exitCode=0 Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.005681 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-bhq6n" event={"ID":"eed323dd-5c26-49d2-8108-7dddd6fbb11f","Type":"ContainerDied","Data":"646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b"} Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.005710 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-597bd57878-bhq6n" Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.005728 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-bhq6n" event={"ID":"eed323dd-5c26-49d2-8108-7dddd6fbb11f","Type":"ContainerDied","Data":"6a90f15348b97976232f9e949c815443f047afbfb305fb190c017131b7212c76"} Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.005754 5034 scope.go:117] "RemoveContainer" containerID="646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b" Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.033225 5034 scope.go:117] "RemoveContainer" containerID="dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c" Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.048454 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-597bd57878-bhq6n"] Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.058417 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-597bd57878-bhq6n"] Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.072948 5034 scope.go:117] "RemoveContainer" containerID="646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b" Jan 05 23:32:19 crc kubenswrapper[5034]: E0105 23:32:19.073645 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b\": container with ID starting with 646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b not found: ID does not exist" containerID="646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b" Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.073688 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b"} err="failed to get container status \"646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b\": rpc error: code = NotFound desc = could not find container \"646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b\": container with ID starting with 646440fbff1800f5827c19297b3c0e932e33231de369eefd910049105329f83b not found: ID does not exist" Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.073711 5034 scope.go:117] "RemoveContainer" containerID="dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c" Jan 05 23:32:19 crc kubenswrapper[5034]: E0105 23:32:19.074103 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c\": container with ID starting with dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c not found: ID does not exist" containerID="dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c" Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.074167 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c"} err="failed to get container status \"dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c\": rpc error: code = NotFound desc = could not find container \"dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c\": container with ID starting with dbb4266089f79a9a6fae4dd42f509f011a0c5f1db771f5afb67f3f5ee079bd2c not 
found: ID does not exist" Jan 05 23:32:19 crc kubenswrapper[5034]: I0105 23:32:19.849843 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" path="/var/lib/kubelet/pods/eed323dd-5c26-49d2-8108-7dddd6fbb11f/volumes" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.244768 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-597bd57878-qnf52"] Jan 05 23:32:24 crc kubenswrapper[5034]: E0105 23:32:24.245858 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df5858b-127b-4f28-8ce3-b0dc938ddaae" containerName="octavia-db-sync" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.245877 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df5858b-127b-4f28-8ce3-b0dc938ddaae" containerName="octavia-db-sync" Jan 05 23:32:24 crc kubenswrapper[5034]: E0105 23:32:24.245902 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" containerName="init" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.245910 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" containerName="init" Jan 05 23:32:24 crc kubenswrapper[5034]: E0105 23:32:24.245928 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" containerName="octavia-amphora-httpd" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.245939 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" containerName="octavia-amphora-httpd" Jan 05 23:32:24 crc kubenswrapper[5034]: E0105 23:32:24.245952 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="init" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.245960 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="init" Jan 05 23:32:24 crc kubenswrapper[5034]: E0105 23:32:24.245969 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df5858b-127b-4f28-8ce3-b0dc938ddaae" containerName="init" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.245979 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df5858b-127b-4f28-8ce3-b0dc938ddaae" containerName="init" Jan 05 23:32:24 crc kubenswrapper[5034]: E0105 23:32:24.246012 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="octavia-api" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.246020 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="octavia-api" Jan 05 23:32:24 crc kubenswrapper[5034]: E0105 23:32:24.246045 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="octavia-api-provider-agent" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.246053 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="octavia-api-provider-agent" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.246301 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="octavia-api" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.246329 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df5858b-127b-4f28-8ce3-b0dc938ddaae" 
containerName="octavia-db-sync" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.246350 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb94be02-f0d1-40be-b18d-2a5fa82f7463" containerName="octavia-api-provider-agent" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.246365 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed323dd-5c26-49d2-8108-7dddd6fbb11f" containerName="octavia-amphora-httpd" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.247721 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-597bd57878-qnf52" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.250948 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.266522 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-597bd57878-qnf52"] Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.408981 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/083c512f-4112-4de4-a1a6-7ee6463e36bf-httpd-config\") pod \"octavia-image-upload-597bd57878-qnf52\" (UID: \"083c512f-4112-4de4-a1a6-7ee6463e36bf\") " pod="openstack/octavia-image-upload-597bd57878-qnf52" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.409151 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/083c512f-4112-4de4-a1a6-7ee6463e36bf-amphora-image\") pod \"octavia-image-upload-597bd57878-qnf52\" (UID: \"083c512f-4112-4de4-a1a6-7ee6463e36bf\") " pod="openstack/octavia-image-upload-597bd57878-qnf52" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.511503 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/083c512f-4112-4de4-a1a6-7ee6463e36bf-amphora-image\") pod \"octavia-image-upload-597bd57878-qnf52\" (UID: \"083c512f-4112-4de4-a1a6-7ee6463e36bf\") " pod="openstack/octavia-image-upload-597bd57878-qnf52" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.511656 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/083c512f-4112-4de4-a1a6-7ee6463e36bf-httpd-config\") pod \"octavia-image-upload-597bd57878-qnf52\" (UID: \"083c512f-4112-4de4-a1a6-7ee6463e36bf\") " pod="openstack/octavia-image-upload-597bd57878-qnf52" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.512130 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/083c512f-4112-4de4-a1a6-7ee6463e36bf-amphora-image\") pod \"octavia-image-upload-597bd57878-qnf52\" (UID: \"083c512f-4112-4de4-a1a6-7ee6463e36bf\") " pod="openstack/octavia-image-upload-597bd57878-qnf52" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.526501 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/083c512f-4112-4de4-a1a6-7ee6463e36bf-httpd-config\") pod \"octavia-image-upload-597bd57878-qnf52\" (UID: \"083c512f-4112-4de4-a1a6-7ee6463e36bf\") " pod="openstack/octavia-image-upload-597bd57878-qnf52" Jan 05 23:32:24 crc kubenswrapper[5034]: I0105 23:32:24.592415 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-597bd57878-qnf52" Jan 05 23:32:25 crc kubenswrapper[5034]: I0105 23:32:25.033232 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-597bd57878-qnf52"] Jan 05 23:32:25 crc kubenswrapper[5034]: I0105 23:32:25.037055 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 23:32:25 crc kubenswrapper[5034]: I0105 23:32:25.082490 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-qnf52" event={"ID":"083c512f-4112-4de4-a1a6-7ee6463e36bf","Type":"ContainerStarted","Data":"c488c069b2b71932fffbdffbc8046a04d22deace7721872d2bec3bff032ae918"} Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.063382 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-5jdkb"] Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.065556 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.070656 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.070789 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.071530 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.085785 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-5jdkb"] Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.104336 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-qnf52" event={"ID":"083c512f-4112-4de4-a1a6-7ee6463e36bf","Type":"ContainerStarted","Data":"72aaaf030dfc4574a97f6c8500f449d9fb4942fe7005b0608bde1bcd68d5b285"} Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.158324 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-amphora-certs\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.158425 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-combined-ca-bundle\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.158500 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7a44d744-4036-49a9-ba5d-dc55a15b65e8-hm-ports\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.158563 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-scripts\") 
pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.158622 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-config-data\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.158698 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7a44d744-4036-49a9-ba5d-dc55a15b65e8-config-data-merged\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.259679 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7a44d744-4036-49a9-ba5d-dc55a15b65e8-hm-ports\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.259764 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-scripts\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.259822 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-config-data\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.259894 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7a44d744-4036-49a9-ba5d-dc55a15b65e8-config-data-merged\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.259957 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-amphora-certs\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.260007 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-combined-ca-bundle\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.261713 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7a44d744-4036-49a9-ba5d-dc55a15b65e8-config-data-merged\") pod \"octavia-healthmanager-5jdkb\" (UID: 
\"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.261996 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7a44d744-4036-49a9-ba5d-dc55a15b65e8-hm-ports\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.266059 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-scripts\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.266541 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-config-data\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.266732 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-combined-ca-bundle\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.266734 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7a44d744-4036-49a9-ba5d-dc55a15b65e8-amphora-certs\") pod \"octavia-healthmanager-5jdkb\" (UID: \"7a44d744-4036-49a9-ba5d-dc55a15b65e8\") " pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:26 crc kubenswrapper[5034]: I0105 23:32:26.386788 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:27 crc kubenswrapper[5034]: I0105 23:32:27.077884 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-5jdkb"] Jan 05 23:32:27 crc kubenswrapper[5034]: I0105 23:32:27.114297 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-5jdkb" event={"ID":"7a44d744-4036-49a9-ba5d-dc55a15b65e8","Type":"ContainerStarted","Data":"012fa8cbc94051f0fcc8ce720f31a6436173d86425945986e0c965dbed01d0bd"} Jan 05 23:32:27 crc kubenswrapper[5034]: I0105 23:32:27.116120 5034 generic.go:334] "Generic (PLEG): container finished" podID="083c512f-4112-4de4-a1a6-7ee6463e36bf" containerID="72aaaf030dfc4574a97f6c8500f449d9fb4942fe7005b0608bde1bcd68d5b285" exitCode=0 Jan 05 23:32:27 crc kubenswrapper[5034]: I0105 23:32:27.116179 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-qnf52" event={"ID":"083c512f-4112-4de4-a1a6-7ee6463e36bf","Type":"ContainerDied","Data":"72aaaf030dfc4574a97f6c8500f449d9fb4942fe7005b0608bde1bcd68d5b285"} Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.004447 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-2tndz"] Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.007246 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.010294 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.018676 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-2tndz"] Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.020562 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.096657 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-combined-ca-bundle\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.096829 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4eb82084-cf41-47cb-96b1-0824f002f49a-hm-ports\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.096864 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4eb82084-cf41-47cb-96b1-0824f002f49a-config-data-merged\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.096913 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-config-data\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.096960 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-amphora-certs\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.097007 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-scripts\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.134442 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-597bd57878-qnf52" event={"ID":"083c512f-4112-4de4-a1a6-7ee6463e36bf","Type":"ContainerStarted","Data":"0e0317ec23b244e60d08341583ac55cb11af9409310c4034fec9800a25ee240a"} Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.136936 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-5jdkb" 
event={"ID":"7a44d744-4036-49a9-ba5d-dc55a15b65e8","Type":"ContainerStarted","Data":"e5f32427cf83e15c4ea162b84940e3fa05fbaec2a2e0ccc9b60ffa154709a48b"} Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.158246 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-597bd57878-qnf52" podStartSLOduration=3.650136981 podStartE2EDuration="4.15821386s" podCreationTimestamp="2026-01-05 23:32:24 +0000 UTC" firstStartedPulling="2026-01-05 23:32:25.036832047 +0000 UTC m=+6037.408831486" lastFinishedPulling="2026-01-05 23:32:25.544908926 +0000 UTC m=+6037.916908365" observedRunningTime="2026-01-05 23:32:28.151846869 +0000 UTC m=+6040.523846308" watchObservedRunningTime="2026-01-05 23:32:28.15821386 +0000 UTC m=+6040.530213299" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.203385 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4eb82084-cf41-47cb-96b1-0824f002f49a-hm-ports\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.203834 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4eb82084-cf41-47cb-96b1-0824f002f49a-config-data-merged\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.204003 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-config-data\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.204184 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-amphora-certs\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.204390 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-scripts\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.204535 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4eb82084-cf41-47cb-96b1-0824f002f49a-config-data-merged\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.204654 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-combined-ca-bundle\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.204675 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4eb82084-cf41-47cb-96b1-0824f002f49a-hm-ports\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.214442 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-config-data\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.220758 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-amphora-certs\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.221027 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-combined-ca-bundle\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.226124 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eb82084-cf41-47cb-96b1-0824f002f49a-scripts\") pod \"octavia-housekeeping-2tndz\" (UID: \"4eb82084-cf41-47cb-96b1-0824f002f49a\") " pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.325360 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:28 crc kubenswrapper[5034]: I0105 23:32:28.915256 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-2tndz"] Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.149201 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-2tndz" event={"ID":"4eb82084-cf41-47cb-96b1-0824f002f49a","Type":"ContainerStarted","Data":"b837db775d52807fd2b279a6b4e220a7f2b89cd0353dc38dec3bee6c75ccfb26"} Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.674740 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-vfjct"] Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.677580 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.681695 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.681974 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.692263 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-vfjct"] Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.849694 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/44398740-2fc7-4264-9614-fc0a5fe8e35e-config-data-merged\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.849773 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-combined-ca-bundle\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.849956 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-scripts\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.850663 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-amphora-certs\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.850784 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-config-data\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.850832 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/44398740-2fc7-4264-9614-fc0a5fe8e35e-hm-ports\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.953309 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-amphora-certs\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.953376 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-config-data\") pod \"octavia-worker-vfjct\" (UID: 
\"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.953398 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/44398740-2fc7-4264-9614-fc0a5fe8e35e-hm-ports\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.953433 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/44398740-2fc7-4264-9614-fc0a5fe8e35e-config-data-merged\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.953527 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-combined-ca-bundle\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.953589 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-scripts\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.954952 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/44398740-2fc7-4264-9614-fc0a5fe8e35e-config-data-merged\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.955636 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/44398740-2fc7-4264-9614-fc0a5fe8e35e-hm-ports\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.960309 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-amphora-certs\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.960884 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-scripts\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.961091 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-combined-ca-bundle\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:29 crc kubenswrapper[5034]: I0105 23:32:29.963922 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/44398740-2fc7-4264-9614-fc0a5fe8e35e-config-data\") pod \"octavia-worker-vfjct\" (UID: \"44398740-2fc7-4264-9614-fc0a5fe8e35e\") " pod="openstack/octavia-worker-vfjct" Jan 05 23:32:30 crc kubenswrapper[5034]: I0105 23:32:30.004709 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-vfjct" Jan 05 23:32:30 crc kubenswrapper[5034]: I0105 23:32:30.158851 5034 generic.go:334] "Generic (PLEG): container finished" podID="7a44d744-4036-49a9-ba5d-dc55a15b65e8" containerID="e5f32427cf83e15c4ea162b84940e3fa05fbaec2a2e0ccc9b60ffa154709a48b" exitCode=0 Jan 05 23:32:30 crc kubenswrapper[5034]: I0105 23:32:30.158913 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-5jdkb" event={"ID":"7a44d744-4036-49a9-ba5d-dc55a15b65e8","Type":"ContainerDied","Data":"e5f32427cf83e15c4ea162b84940e3fa05fbaec2a2e0ccc9b60ffa154709a48b"} Jan 05 23:32:30 crc kubenswrapper[5034]: I0105 23:32:30.845243 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-vfjct"] Jan 05 23:32:31 crc kubenswrapper[5034]: I0105 23:32:31.184776 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vfjct" event={"ID":"44398740-2fc7-4264-9614-fc0a5fe8e35e","Type":"ContainerStarted","Data":"d7f0326e20cce5ec978d2ff3208cec9a32e4209aafec970334c3cf4962c10544"} Jan 05 23:32:31 crc kubenswrapper[5034]: I0105 23:32:31.187318 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-2tndz" event={"ID":"4eb82084-cf41-47cb-96b1-0824f002f49a","Type":"ContainerStarted","Data":"6600cea55f9f1a5fce4da8b508d9658598e5dad8ab3d45ae9ce4b329124bc172"} Jan 05 23:32:31 crc kubenswrapper[5034]: I0105 23:32:31.192288 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-5jdkb" event={"ID":"7a44d744-4036-49a9-ba5d-dc55a15b65e8","Type":"ContainerStarted","Data":"d5933ff5c180d720777af076152765440ca8c28efe180f67dde881bddbfa848e"} Jan 05 23:32:31 crc kubenswrapper[5034]: I0105 23:32:31.195633 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:31 crc kubenswrapper[5034]: I0105 23:32:31.236521 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-5jdkb" podStartSLOduration=5.23649555 podStartE2EDuration="5.23649555s" podCreationTimestamp="2026-01-05 23:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:32:31.230820689 +0000 UTC m=+6043.602820138" watchObservedRunningTime="2026-01-05 23:32:31.23649555 +0000 UTC m=+6043.608495009" Jan 05 23:32:33 crc kubenswrapper[5034]: I0105 23:32:33.217789 5034 generic.go:334] "Generic (PLEG): container finished" podID="4eb82084-cf41-47cb-96b1-0824f002f49a" containerID="6600cea55f9f1a5fce4da8b508d9658598e5dad8ab3d45ae9ce4b329124bc172" exitCode=0 Jan 05 23:32:33 crc kubenswrapper[5034]: I0105 23:32:33.217886 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-2tndz" event={"ID":"4eb82084-cf41-47cb-96b1-0824f002f49a","Type":"ContainerDied","Data":"6600cea55f9f1a5fce4da8b508d9658598e5dad8ab3d45ae9ce4b329124bc172"} Jan 05 23:32:34 crc kubenswrapper[5034]: I0105 23:32:34.231645 5034 generic.go:334] "Generic (PLEG): container finished" podID="44398740-2fc7-4264-9614-fc0a5fe8e35e" 
containerID="094ec004486e48d50ca8dbe0ea0db5fafb23bf5863da490408116ee430fa3453" exitCode=0 Jan 05 23:32:34 crc kubenswrapper[5034]: I0105 23:32:34.231752 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vfjct" event={"ID":"44398740-2fc7-4264-9614-fc0a5fe8e35e","Type":"ContainerDied","Data":"094ec004486e48d50ca8dbe0ea0db5fafb23bf5863da490408116ee430fa3453"} Jan 05 23:32:34 crc kubenswrapper[5034]: I0105 23:32:34.235115 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-2tndz" event={"ID":"4eb82084-cf41-47cb-96b1-0824f002f49a","Type":"ContainerStarted","Data":"d0ec125d805af14c11ba52eeb2ea36c5e6fd1316c9447f136c619a373aa508a9"} Jan 05 23:32:34 crc kubenswrapper[5034]: I0105 23:32:34.235343 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:34 crc kubenswrapper[5034]: I0105 23:32:34.297890 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-2tndz" podStartSLOduration=5.890150779 podStartE2EDuration="7.297861229s" podCreationTimestamp="2026-01-05 23:32:27 +0000 UTC" firstStartedPulling="2026-01-05 23:32:28.913046682 +0000 UTC m=+6041.285046121" lastFinishedPulling="2026-01-05 23:32:30.320757132 +0000 UTC m=+6042.692756571" observedRunningTime="2026-01-05 23:32:34.285238711 +0000 UTC m=+6046.657238170" watchObservedRunningTime="2026-01-05 23:32:34.297861229 +0000 UTC m=+6046.669860668" Jan 05 23:32:35 crc kubenswrapper[5034]: I0105 23:32:35.248447 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-vfjct" event={"ID":"44398740-2fc7-4264-9614-fc0a5fe8e35e","Type":"ContainerStarted","Data":"4abf4680c5fd5f3985d92618a5a78408943bd04b655348bac87e1e3f161a4fe0"} Jan 05 23:32:35 crc kubenswrapper[5034]: I0105 23:32:35.250058 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-vfjct" Jan 05 23:32:35 crc kubenswrapper[5034]: I0105 23:32:35.291263 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-vfjct" podStartSLOduration=4.301286117 podStartE2EDuration="6.291238961s" podCreationTimestamp="2026-01-05 23:32:29 +0000 UTC" firstStartedPulling="2026-01-05 23:32:30.866316604 +0000 UTC m=+6043.238316043" lastFinishedPulling="2026-01-05 23:32:32.856269448 +0000 UTC m=+6045.228268887" observedRunningTime="2026-01-05 23:32:35.271297765 +0000 UTC m=+6047.643297204" watchObservedRunningTime="2026-01-05 23:32:35.291238961 +0000 UTC m=+6047.663238400" Jan 05 23:32:41 crc kubenswrapper[5034]: I0105 23:32:41.436982 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-5jdkb" Jan 05 23:32:43 crc kubenswrapper[5034]: I0105 23:32:43.355007 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-2tndz" Jan 05 23:32:45 crc kubenswrapper[5034]: I0105 23:32:45.058148 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-vfjct" Jan 05 23:33:14 crc kubenswrapper[5034]: I0105 23:33:14.048575 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f936-account-create-update-scbns"] Jan 05 23:33:14 crc kubenswrapper[5034]: I0105 23:33:14.060456 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f936-account-create-update-scbns"] Jan 05 23:33:15 crc kubenswrapper[5034]: I0105 23:33:15.040982 
5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-qjh2z"] Jan 05 23:33:15 crc kubenswrapper[5034]: I0105 23:33:15.054635 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-qjh2z"] Jan 05 23:33:15 crc kubenswrapper[5034]: I0105 23:33:15.854049 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57dc19d8-9194-437d-94a7-191f3a731c2e" path="/var/lib/kubelet/pods/57dc19d8-9194-437d-94a7-191f3a731c2e/volumes" Jan 05 23:33:15 crc kubenswrapper[5034]: I0105 23:33:15.854838 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f" path="/var/lib/kubelet/pods/a6bb3fef-f9db-4ca3-a4f3-bfee9662d80f/volumes" Jan 05 23:33:20 crc kubenswrapper[5034]: I0105 23:33:20.468896 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:33:20 crc kubenswrapper[5034]: I0105 23:33:20.470505 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:33:24 crc kubenswrapper[5034]: I0105 23:33:24.060797 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-l6bzc"] Jan 05 23:33:24 crc kubenswrapper[5034]: I0105 23:33:24.095212 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-l6bzc"] Jan 05 23:33:25 crc kubenswrapper[5034]: I0105 23:33:25.849968 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d37640-0cda-47d4-8f50-5e6ce519ca8e" path="/var/lib/kubelet/pods/e4d37640-0cda-47d4-8f50-5e6ce519ca8e/volumes" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.307674 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78c77cf66f-ld7mf"] Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.313660 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.323787 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78c77cf66f-ld7mf"] Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.324557 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.324944 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.325313 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pbx7z" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.325588 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.341247 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.341840 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-log" containerID="cri-o://fedd32de1ea0dee23eed370e7e7aca2cd52d121f835ef2dec83aa400a5de5b65" gracePeriod=30 Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.341981 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-httpd" containerID="cri-o://ad1fc8f8e22ddfbc234556389fe8a1a5a7cd43b27939001523a85ba87750a806" gracePeriod=30 Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.358695 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-scripts\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.358756 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a59d5fe8-ae40-46e7-8d53-86d6facef712-horizon-secret-key\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.358788 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-config-data\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.358878 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59p8x\" (UniqueName: \"kubernetes.io/projected/a59d5fe8-ae40-46e7-8d53-86d6facef712-kube-api-access-59p8x\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.358897 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a59d5fe8-ae40-46e7-8d53-86d6facef712-logs\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.405308 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66dbcfc7c-6qsh2"] Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.411071 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.437437 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66dbcfc7c-6qsh2"] Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460182 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b391b390-6f44-4bbc-b444-f504511bf7aa-logs\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460241 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-scripts\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460276 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-scripts\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460298 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a59d5fe8-ae40-46e7-8d53-86d6facef712-horizon-secret-key\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460329 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b391b390-6f44-4bbc-b444-f504511bf7aa-horizon-secret-key\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460350 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-config-data\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460375 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-config-data\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460399 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qgp2k\" (UniqueName: \"kubernetes.io/projected/b391b390-6f44-4bbc-b444-f504511bf7aa-kube-api-access-qgp2k\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460468 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59p8x\" (UniqueName: \"kubernetes.io/projected/a59d5fe8-ae40-46e7-8d53-86d6facef712-kube-api-access-59p8x\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460485 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a59d5fe8-ae40-46e7-8d53-86d6facef712-logs\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.460900 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a59d5fe8-ae40-46e7-8d53-86d6facef712-logs\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.462817 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-config-data\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.465489 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-scripts\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.472898 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a59d5fe8-ae40-46e7-8d53-86d6facef712-horizon-secret-key\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.488479 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.489149 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-log" containerID="cri-o://f2c3e58355fc4aeeedb44875a1397f73b9aa25aeebd23b3218cd077c5a3328cd" gracePeriod=30 Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.489965 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-httpd" containerID="cri-o://818f8dadc8311256610a142e506fba4ca86dd84aee1a66ac132afb8416098e21" gracePeriod=30 Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.498744 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59p8x\" (UniqueName: 
\"kubernetes.io/projected/a59d5fe8-ae40-46e7-8d53-86d6facef712-kube-api-access-59p8x\") pod \"horizon-78c77cf66f-ld7mf\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.562382 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgp2k\" (UniqueName: \"kubernetes.io/projected/b391b390-6f44-4bbc-b444-f504511bf7aa-kube-api-access-qgp2k\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.562784 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b391b390-6f44-4bbc-b444-f504511bf7aa-logs\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.562905 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-scripts\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.563019 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b391b390-6f44-4bbc-b444-f504511bf7aa-horizon-secret-key\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.563127 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-config-data\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.563312 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b391b390-6f44-4bbc-b444-f504511bf7aa-logs\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.563664 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-scripts\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.564485 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-config-data\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.567394 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b391b390-6f44-4bbc-b444-f504511bf7aa-horizon-secret-key\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc 
kubenswrapper[5034]: I0105 23:33:37.580184 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgp2k\" (UniqueName: \"kubernetes.io/projected/b391b390-6f44-4bbc-b444-f504511bf7aa-kube-api-access-qgp2k\") pod \"horizon-66dbcfc7c-6qsh2\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.648307 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.734443 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.919780 5034 generic.go:334] "Generic (PLEG): container finished" podID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerID="f2c3e58355fc4aeeedb44875a1397f73b9aa25aeebd23b3218cd077c5a3328cd" exitCode=143 Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.919882 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46cfc4db-fd97-43d5-b21e-39d6059528a2","Type":"ContainerDied","Data":"f2c3e58355fc4aeeedb44875a1397f73b9aa25aeebd23b3218cd077c5a3328cd"} Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.922847 5034 generic.go:334] "Generic (PLEG): container finished" podID="5a290e39-d771-4f71-9568-489221fc4570" containerID="fedd32de1ea0dee23eed370e7e7aca2cd52d121f835ef2dec83aa400a5de5b65" exitCode=143 Jan 05 23:33:37 crc kubenswrapper[5034]: I0105 23:33:37.922881 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a290e39-d771-4f71-9568-489221fc4570","Type":"ContainerDied","Data":"fedd32de1ea0dee23eed370e7e7aca2cd52d121f835ef2dec83aa400a5de5b65"} Jan 05 23:33:38 crc kubenswrapper[5034]: I0105 23:33:38.144516 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78c77cf66f-ld7mf"] Jan 05 23:33:38 crc kubenswrapper[5034]: W0105 23:33:38.146323 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda59d5fe8_ae40_46e7_8d53_86d6facef712.slice/crio-bf6684ecf5d630c9a9230291739628e3f15096b4d585977eaf419f047059f475 WatchSource:0}: Error finding container bf6684ecf5d630c9a9230291739628e3f15096b4d585977eaf419f047059f475: Status 404 returned error can't find the container with id bf6684ecf5d630c9a9230291739628e3f15096b4d585977eaf419f047059f475 Jan 05 23:33:38 crc kubenswrapper[5034]: I0105 23:33:38.308104 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66dbcfc7c-6qsh2"] Jan 05 23:33:38 crc kubenswrapper[5034]: W0105 23:33:38.342097 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb391b390_6f44_4bbc_b444_f504511bf7aa.slice/crio-e438a4ae854b7904a6ca6a365b2944fecfe368b317679feff63bc749f444313a WatchSource:0}: Error finding container e438a4ae854b7904a6ca6a365b2944fecfe368b317679feff63bc749f444313a: Status 404 returned error can't find the container with id e438a4ae854b7904a6ca6a365b2944fecfe368b317679feff63bc749f444313a Jan 05 23:33:38 crc kubenswrapper[5034]: I0105 23:33:38.962662 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66dbcfc7c-6qsh2" 
event={"ID":"b391b390-6f44-4bbc-b444-f504511bf7aa","Type":"ContainerStarted","Data":"e438a4ae854b7904a6ca6a365b2944fecfe368b317679feff63bc749f444313a"} Jan 05 23:33:38 crc kubenswrapper[5034]: I0105 23:33:38.963831 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78c77cf66f-ld7mf" event={"ID":"a59d5fe8-ae40-46e7-8d53-86d6facef712","Type":"ContainerStarted","Data":"bf6684ecf5d630c9a9230291739628e3f15096b4d585977eaf419f047059f475"} Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.767808 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66dbcfc7c-6qsh2"] Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.803848 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bf57b7474-sp5s9"] Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.815956 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.833514 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.946172 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bf57b7474-sp5s9"] Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.975405 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-combined-ca-bundle\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.975539 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-secret-key\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.975575 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-tls-certs\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.976021 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4rdm\" (UniqueName: \"kubernetes.io/projected/0ea2d3fc-193c-497f-9d06-42f9902c818e-kube-api-access-t4rdm\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.976086 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-scripts\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.976256 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea2d3fc-193c-497f-9d06-42f9902c818e-logs\") pod 
\"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.976328 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-config-data\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.980815 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78c77cf66f-ld7mf"] Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.994303 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7966f5c6c6-ct6c7"] Jan 05 23:33:39 crc kubenswrapper[5034]: I0105 23:33:39.999210 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.019521 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7966f5c6c6-ct6c7"] Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.081145 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-combined-ca-bundle\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.081227 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-secret-key\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.081262 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-tls-certs\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.081306 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4rdm\" (UniqueName: \"kubernetes.io/projected/0ea2d3fc-193c-497f-9d06-42f9902c818e-kube-api-access-t4rdm\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.081334 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-scripts\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.081405 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea2d3fc-193c-497f-9d06-42f9902c818e-logs\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.081442 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-config-data\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.082446 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea2d3fc-193c-497f-9d06-42f9902c818e-logs\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.082812 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-scripts\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.086220 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-config-data\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.088650 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-combined-ca-bundle\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.094501 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-secret-key\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.097955 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4rdm\" (UniqueName: \"kubernetes.io/projected/0ea2d3fc-193c-497f-9d06-42f9902c818e-kube-api-access-t4rdm\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.108447 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-tls-certs\") pod \"horizon-7bf57b7474-sp5s9\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.184701 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-config-data\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.184782 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/171b4c86-ff76-4145-9324-c0c5a501e968-logs\") pod 
\"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.185652 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-tls-certs\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.185689 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-scripts\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.185812 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-combined-ca-bundle\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.185856 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-secret-key\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.185880 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmbt\" (UniqueName: \"kubernetes.io/projected/171b4c86-ff76-4145-9324-c0c5a501e968-kube-api-access-ttmbt\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.188625 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.290677 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-combined-ca-bundle\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.290753 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-secret-key\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.290788 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmbt\" (UniqueName: \"kubernetes.io/projected/171b4c86-ff76-4145-9324-c0c5a501e968-kube-api-access-ttmbt\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.290841 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-config-data\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.290890 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/171b4c86-ff76-4145-9324-c0c5a501e968-logs\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.290987 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-tls-certs\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.291012 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-scripts\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.292067 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-scripts\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.292909 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/171b4c86-ff76-4145-9324-c0c5a501e968-logs\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.293450 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-config-data\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.306440 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-secret-key\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.307754 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-tls-certs\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.311567 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-combined-ca-bundle\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.315885 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmbt\" (UniqueName: \"kubernetes.io/projected/171b4c86-ff76-4145-9324-c0c5a501e968-kube-api-access-ttmbt\") pod \"horizon-7966f5c6c6-ct6c7\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:40 crc kubenswrapper[5034]: I0105 23:33:40.346183 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:40.670971 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.1.59:9292/healthcheck\": read tcp 10.217.0.2:52976->10.217.1.59:9292: read: connection reset by peer" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:40.671033 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.1.59:9292/healthcheck\": read tcp 10.217.0.2:52964->10.217.1.59:9292: read: connection reset by peer" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:40.725807 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bf57b7474-sp5s9"] Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:40.753690 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.1.60:9292/healthcheck\": read tcp 10.217.0.2:54294->10.217.1.60:9292: read: connection reset by peer" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:40.753754 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.1.60:9292/healthcheck\": read tcp 10.217.0.2:54308->10.217.1.60:9292: read: connection reset by peer" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:40.870760 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7966f5c6c6-ct6c7"] Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.012560 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf57b7474-sp5s9" event={"ID":"0ea2d3fc-193c-497f-9d06-42f9902c818e","Type":"ContainerStarted","Data":"6ff1936fdd08592bfe8d0a0a58ff40e21bf9ee2eaca59ef895467762bbc1878b"} Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.019275 5034 generic.go:334] "Generic (PLEG): container finished" podID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerID="818f8dadc8311256610a142e506fba4ca86dd84aee1a66ac132afb8416098e21" exitCode=0 Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.019373 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46cfc4db-fd97-43d5-b21e-39d6059528a2","Type":"ContainerDied","Data":"818f8dadc8311256610a142e506fba4ca86dd84aee1a66ac132afb8416098e21"} Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.029576 5034 generic.go:334] "Generic (PLEG): container finished" podID="5a290e39-d771-4f71-9568-489221fc4570" containerID="ad1fc8f8e22ddfbc234556389fe8a1a5a7cd43b27939001523a85ba87750a806" exitCode=0 Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.029690 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a290e39-d771-4f71-9568-489221fc4570","Type":"ContainerDied","Data":"ad1fc8f8e22ddfbc234556389fe8a1a5a7cd43b27939001523a85ba87750a806"} Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.032459 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7966f5c6c6-ct6c7" event={"ID":"171b4c86-ff76-4145-9324-c0c5a501e968","Type":"ContainerStarted","Data":"1b0bb73385f515e3a26949867636f58e0f6a7308c48897b382c70f7a5787627c"} Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.816711 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.830512 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.957500 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-public-tls-certs\") pod \"5a290e39-d771-4f71-9568-489221fc4570\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.957732 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-scripts\") pod \"46cfc4db-fd97-43d5-b21e-39d6059528a2\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.957842 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-config-data\") pod \"5a290e39-d771-4f71-9568-489221fc4570\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.957895 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-combined-ca-bundle\") pod \"46cfc4db-fd97-43d5-b21e-39d6059528a2\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960340 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-httpd-run\") pod \"5a290e39-d771-4f71-9568-489221fc4570\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960426 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-logs\") pod \"46cfc4db-fd97-43d5-b21e-39d6059528a2\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960513 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-httpd-run\") pod \"46cfc4db-fd97-43d5-b21e-39d6059528a2\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960547 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-logs\") pod \"5a290e39-d771-4f71-9568-489221fc4570\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960588 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvq4w\" (UniqueName: 
\"kubernetes.io/projected/46cfc4db-fd97-43d5-b21e-39d6059528a2-kube-api-access-vvq4w\") pod \"46cfc4db-fd97-43d5-b21e-39d6059528a2\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960776 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-config-data\") pod \"46cfc4db-fd97-43d5-b21e-39d6059528a2\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960822 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-combined-ca-bundle\") pod \"5a290e39-d771-4f71-9568-489221fc4570\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960862 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-scripts\") pod \"5a290e39-d771-4f71-9568-489221fc4570\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960901 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-internal-tls-certs\") pod \"46cfc4db-fd97-43d5-b21e-39d6059528a2\" (UID: \"46cfc4db-fd97-43d5-b21e-39d6059528a2\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.960932 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cpqd\" (UniqueName: \"kubernetes.io/projected/5a290e39-d771-4f71-9568-489221fc4570-kube-api-access-9cpqd\") pod \"5a290e39-d771-4f71-9568-489221fc4570\" (UID: \"5a290e39-d771-4f71-9568-489221fc4570\") " Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.961244 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5a290e39-d771-4f71-9568-489221fc4570" (UID: "5a290e39-d771-4f71-9568-489221fc4570"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.962193 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.967692 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cfc4db-fd97-43d5-b21e-39d6059528a2-kube-api-access-vvq4w" (OuterVolumeSpecName: "kube-api-access-vvq4w") pod "46cfc4db-fd97-43d5-b21e-39d6059528a2" (UID: "46cfc4db-fd97-43d5-b21e-39d6059528a2"). InnerVolumeSpecName "kube-api-access-vvq4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.968104 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-logs" (OuterVolumeSpecName: "logs") pod "46cfc4db-fd97-43d5-b21e-39d6059528a2" (UID: "46cfc4db-fd97-43d5-b21e-39d6059528a2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.968529 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-logs" (OuterVolumeSpecName: "logs") pod "5a290e39-d771-4f71-9568-489221fc4570" (UID: "5a290e39-d771-4f71-9568-489221fc4570"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.968521 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "46cfc4db-fd97-43d5-b21e-39d6059528a2" (UID: "46cfc4db-fd97-43d5-b21e-39d6059528a2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.975900 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-scripts" (OuterVolumeSpecName: "scripts") pod "46cfc4db-fd97-43d5-b21e-39d6059528a2" (UID: "46cfc4db-fd97-43d5-b21e-39d6059528a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.980399 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-scripts" (OuterVolumeSpecName: "scripts") pod "5a290e39-d771-4f71-9568-489221fc4570" (UID: "5a290e39-d771-4f71-9568-489221fc4570"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:33:41 crc kubenswrapper[5034]: I0105 23:33:41.999561 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46cfc4db-fd97-43d5-b21e-39d6059528a2" (UID: "46cfc4db-fd97-43d5-b21e-39d6059528a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.000141 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a290e39-d771-4f71-9568-489221fc4570-kube-api-access-9cpqd" (OuterVolumeSpecName: "kube-api-access-9cpqd") pod "5a290e39-d771-4f71-9568-489221fc4570" (UID: "5a290e39-d771-4f71-9568-489221fc4570"). InnerVolumeSpecName "kube-api-access-9cpqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.040508 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a290e39-d771-4f71-9568-489221fc4570" (UID: "5a290e39-d771-4f71-9568-489221fc4570"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065468 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065504 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065515 5034 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46cfc4db-fd97-43d5-b21e-39d6059528a2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065524 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a290e39-d771-4f71-9568-489221fc4570-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065538 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvq4w\" (UniqueName: \"kubernetes.io/projected/46cfc4db-fd97-43d5-b21e-39d6059528a2-kube-api-access-vvq4w\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065547 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065556 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065575 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cpqd\" (UniqueName: \"kubernetes.io/projected/5a290e39-d771-4f71-9568-489221fc4570-kube-api-access-9cpqd\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.065586 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.068647 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46cfc4db-fd97-43d5-b21e-39d6059528a2","Type":"ContainerDied","Data":"e62c63e63ac639a77d0afe9cf32b785959a728eb0e35e7efe060d88877d17d7f"} Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.068709 5034 scope.go:117] "RemoveContainer" containerID="818f8dadc8311256610a142e506fba4ca86dd84aee1a66ac132afb8416098e21" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.068947 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.083033 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-config-data" (OuterVolumeSpecName: "config-data") pod "5a290e39-d771-4f71-9568-489221fc4570" (UID: "5a290e39-d771-4f71-9568-489221fc4570"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.087853 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a290e39-d771-4f71-9568-489221fc4570","Type":"ContainerDied","Data":"cf2d2d84a47dd4b9e1c9f4a0428c87a56c45db8f9c8b203be8b0088347b92da6"} Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.088121 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.099202 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-config-data" (OuterVolumeSpecName: "config-data") pod "46cfc4db-fd97-43d5-b21e-39d6059528a2" (UID: "46cfc4db-fd97-43d5-b21e-39d6059528a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.125336 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5a290e39-d771-4f71-9568-489221fc4570" (UID: "5a290e39-d771-4f71-9568-489221fc4570"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.150677 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "46cfc4db-fd97-43d5-b21e-39d6059528a2" (UID: "46cfc4db-fd97-43d5-b21e-39d6059528a2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.167805 5034 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.167843 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290e39-d771-4f71-9568-489221fc4570-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.167856 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.167869 5034 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cfc4db-fd97-43d5-b21e-39d6059528a2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.428903 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.446446 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.459341 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:33:42 crc kubenswrapper[5034]: E0105 23:33:42.459834 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-log" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.459857 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-log" Jan 05 23:33:42 crc kubenswrapper[5034]: E0105 23:33:42.459887 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-httpd" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.459894 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-httpd" Jan 05 23:33:42 crc kubenswrapper[5034]: E0105 23:33:42.459906 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-log" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.459913 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-log" Jan 05 23:33:42 crc kubenswrapper[5034]: E0105 23:33:42.459928 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-httpd" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.459934 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-httpd" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.460127 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-httpd" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.460144 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" 
containerName="glance-log" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.460161 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" containerName="glance-httpd" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.460175 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a290e39-d771-4f71-9568-489221fc4570" containerName="glance-log" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.461437 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.465691 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.466509 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.466706 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r5jhl" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.469012 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.469546 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.479936 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.492375 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.505202 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.507764 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.511063 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.511275 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.532746 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.581507 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.581819 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.582049 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.582267 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrffb\" (UniqueName: \"kubernetes.io/projected/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-kube-api-access-jrffb\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.582502 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.582704 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.582918 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685164 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685251 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk4pl\" (UniqueName: \"kubernetes.io/projected/4fd7feb8-6291-4928-bf9b-534253512819-kube-api-access-vk4pl\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685284 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685315 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fd7feb8-6291-4928-bf9b-534253512819-logs\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685430 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685455 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685526 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685564 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd7feb8-6291-4928-bf9b-534253512819-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685608 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrffb\" (UniqueName: \"kubernetes.io/projected/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-kube-api-access-jrffb\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685648 5034 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685691 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685731 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685782 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.685826 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.687260 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.687959 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.694984 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.695142 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.696940 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.697703 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.726675 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrffb\" (UniqueName: \"kubernetes.io/projected/5cee0d46-4ed0-4fc9-9f55-f035ef40fecc-kube-api-access-jrffb\") pod \"glance-default-internal-api-0\" (UID: \"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc\") " pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.791192 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd7feb8-6291-4928-bf9b-534253512819-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.791340 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.791423 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.791562 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.791601 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk4pl\" (UniqueName: \"kubernetes.io/projected/4fd7feb8-6291-4928-bf9b-534253512819-kube-api-access-vk4pl\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.791633 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.791663 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4fd7feb8-6291-4928-bf9b-534253512819-logs\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.794058 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd7feb8-6291-4928-bf9b-534253512819-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.795162 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fd7feb8-6291-4928-bf9b-534253512819-logs\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.799488 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.802949 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.803640 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.804622 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.809531 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fd7feb8-6291-4928-bf9b-534253512819-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.817021 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk4pl\" (UniqueName: \"kubernetes.io/projected/4fd7feb8-6291-4928-bf9b-534253512819-kube-api-access-vk4pl\") pod \"glance-default-external-api-0\" (UID: \"4fd7feb8-6291-4928-bf9b-534253512819\") " pod="openstack/glance-default-external-api-0" Jan 05 23:33:42 crc kubenswrapper[5034]: I0105 23:33:42.847598 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 23:33:43 crc kubenswrapper[5034]: I0105 23:33:43.860150 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46cfc4db-fd97-43d5-b21e-39d6059528a2" path="/var/lib/kubelet/pods/46cfc4db-fd97-43d5-b21e-39d6059528a2/volumes" Jan 05 23:33:43 crc kubenswrapper[5034]: I0105 23:33:43.862918 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a290e39-d771-4f71-9568-489221fc4570" path="/var/lib/kubelet/pods/5a290e39-d771-4f71-9568-489221fc4570/volumes" Jan 05 23:33:47 crc kubenswrapper[5034]: I0105 23:33:47.132667 5034 scope.go:117] "RemoveContainer" containerID="f2c3e58355fc4aeeedb44875a1397f73b9aa25aeebd23b3218cd077c5a3328cd" Jan 05 23:33:47 crc kubenswrapper[5034]: I0105 23:33:47.251854 5034 scope.go:117] "RemoveContainer" containerID="ad1fc8f8e22ddfbc234556389fe8a1a5a7cd43b27939001523a85ba87750a806" Jan 05 23:33:47 crc kubenswrapper[5034]: I0105 23:33:47.388324 5034 scope.go:117] "RemoveContainer" containerID="fedd32de1ea0dee23eed370e7e7aca2cd52d121f835ef2dec83aa400a5de5b65" Jan 05 23:33:47 crc kubenswrapper[5034]: I0105 23:33:47.789822 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 23:33:47 crc kubenswrapper[5034]: I0105 23:33:47.975675 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.169699 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7966f5c6c6-ct6c7" event={"ID":"171b4c86-ff76-4145-9324-c0c5a501e968","Type":"ContainerStarted","Data":"e411a3eab2b7ca31e9c58eb4c15d002ed9d5332f570afe081989f59929ee4331"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.170074 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7966f5c6c6-ct6c7" event={"ID":"171b4c86-ff76-4145-9324-c0c5a501e968","Type":"ContainerStarted","Data":"c848d8cc43e918f1830ecc176e77215c84e7ec172bf4cb09ccb4b241f3829099"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.174774 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78c77cf66f-ld7mf" event={"ID":"a59d5fe8-ae40-46e7-8d53-86d6facef712","Type":"ContainerStarted","Data":"84b88905f30f9f29c945ad6b42f6bcc84a04487da3a146821bebde81496910dd"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.174818 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78c77cf66f-ld7mf" event={"ID":"a59d5fe8-ae40-46e7-8d53-86d6facef712","Type":"ContainerStarted","Data":"638c5ff766acd63968bfe7c51957902177e214312f745bb76a28552afff017af"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.174856 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78c77cf66f-ld7mf" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerName="horizon-log" containerID="cri-o://638c5ff766acd63968bfe7c51957902177e214312f745bb76a28552afff017af" gracePeriod=30 Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.174908 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78c77cf66f-ld7mf" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerName="horizon" containerID="cri-o://84b88905f30f9f29c945ad6b42f6bcc84a04487da3a146821bebde81496910dd" gracePeriod=30 Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.179038 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"4fd7feb8-6291-4928-bf9b-534253512819","Type":"ContainerStarted","Data":"f16e39ea924ee7433a6326167a832495fd0d75eab821e2211befe564774a655a"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.182170 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf57b7474-sp5s9" event={"ID":"0ea2d3fc-193c-497f-9d06-42f9902c818e","Type":"ContainerStarted","Data":"afb1ccf19a436dff6531f5c50da289ec669db82f0b2dc659dcfdcc987654717d"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.182211 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf57b7474-sp5s9" event={"ID":"0ea2d3fc-193c-497f-9d06-42f9902c818e","Type":"ContainerStarted","Data":"f0524a9770bf3e5791e8fe4c07737aeea0773d857d16fda09849f22e19c9bc7e"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.185381 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66dbcfc7c-6qsh2" event={"ID":"b391b390-6f44-4bbc-b444-f504511bf7aa","Type":"ContainerStarted","Data":"3c9a31fdd84df109c291954a36aec019e049367ccb73bb8f644f1ed677b5f7b8"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.185422 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66dbcfc7c-6qsh2" event={"ID":"b391b390-6f44-4bbc-b444-f504511bf7aa","Type":"ContainerStarted","Data":"30aff88540cdc3a8d2c5815292118b034cfac20451cdd29999b44af862a3c3e7"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.185559 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66dbcfc7c-6qsh2" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerName="horizon-log" containerID="cri-o://30aff88540cdc3a8d2c5815292118b034cfac20451cdd29999b44af862a3c3e7" gracePeriod=30 Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.185670 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66dbcfc7c-6qsh2" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerName="horizon" containerID="cri-o://3c9a31fdd84df109c291954a36aec019e049367ccb73bb8f644f1ed677b5f7b8" gracePeriod=30 Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.187483 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc","Type":"ContainerStarted","Data":"dcc63191c20be604ad7ee2ed38daa67f5502a1dcd22af1f404efbe55a19bef45"} Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.230311 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7966f5c6c6-ct6c7" podStartSLOduration=2.857426432 podStartE2EDuration="9.23028399s" podCreationTimestamp="2026-01-05 23:33:39 +0000 UTC" firstStartedPulling="2026-01-05 23:33:40.910758496 +0000 UTC m=+6113.282757925" lastFinishedPulling="2026-01-05 23:33:47.283616044 +0000 UTC m=+6119.655615483" observedRunningTime="2026-01-05 23:33:48.198232191 +0000 UTC m=+6120.570231630" watchObservedRunningTime="2026-01-05 23:33:48.23028399 +0000 UTC m=+6120.602283429" Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.254203 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66dbcfc7c-6qsh2" podStartSLOduration=2.2849146559999998 podStartE2EDuration="11.254184018s" podCreationTimestamp="2026-01-05 23:33:37 +0000 UTC" firstStartedPulling="2026-01-05 23:33:38.345072664 +0000 UTC m=+6110.717072103" lastFinishedPulling="2026-01-05 23:33:47.314342026 +0000 UTC m=+6119.686341465" observedRunningTime="2026-01-05 23:33:48.252848421 +0000 
UTC m=+6120.624847860" watchObservedRunningTime="2026-01-05 23:33:48.254184018 +0000 UTC m=+6120.626183447" Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.264597 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78c77cf66f-ld7mf" podStartSLOduration=2.123725221 podStartE2EDuration="11.264574653s" podCreationTimestamp="2026-01-05 23:33:37 +0000 UTC" firstStartedPulling="2026-01-05 23:33:38.149489753 +0000 UTC m=+6110.521489192" lastFinishedPulling="2026-01-05 23:33:47.290339185 +0000 UTC m=+6119.662338624" observedRunningTime="2026-01-05 23:33:48.226862443 +0000 UTC m=+6120.598861882" watchObservedRunningTime="2026-01-05 23:33:48.264574653 +0000 UTC m=+6120.636574092" Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.297228 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bf57b7474-sp5s9" podStartSLOduration=2.802933855 podStartE2EDuration="9.297202939s" podCreationTimestamp="2026-01-05 23:33:39 +0000 UTC" firstStartedPulling="2026-01-05 23:33:40.836933651 +0000 UTC m=+6113.208933090" lastFinishedPulling="2026-01-05 23:33:47.331202725 +0000 UTC m=+6119.703202174" observedRunningTime="2026-01-05 23:33:48.274347911 +0000 UTC m=+6120.646347350" watchObservedRunningTime="2026-01-05 23:33:48.297202939 +0000 UTC m=+6120.669202378" Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.378464 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nwctl"] Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.383441 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.389713 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwctl"] Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.552175 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-catalog-content\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.552548 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-utilities\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.552604 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjdmn\" (UniqueName: \"kubernetes.io/projected/70eae5f9-cb89-41bb-8da1-6665c07a74b4-kube-api-access-wjdmn\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.654817 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-catalog-content\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:48 crc 
Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.654878 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-utilities\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl"
Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.654913 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjdmn\" (UniqueName: \"kubernetes.io/projected/70eae5f9-cb89-41bb-8da1-6665c07a74b4-kube-api-access-wjdmn\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl"
Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.655892 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-catalog-content\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl"
Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.656284 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-utilities\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl"
Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.674541 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjdmn\" (UniqueName: \"kubernetes.io/projected/70eae5f9-cb89-41bb-8da1-6665c07a74b4-kube-api-access-wjdmn\") pod \"redhat-marketplace-nwctl\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " pod="openshift-marketplace/redhat-marketplace-nwctl"
Jan 05 23:33:48 crc kubenswrapper[5034]: I0105 23:33:48.720923 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwctl"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:49 crc kubenswrapper[5034]: I0105 23:33:49.245565 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc","Type":"ContainerStarted","Data":"abe7b5664a730510178a8eba5aed066474c559a1d8e3fa19a52800bec0f96b99"} Jan 05 23:33:49 crc kubenswrapper[5034]: I0105 23:33:49.258701 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fd7feb8-6291-4928-bf9b-534253512819","Type":"ContainerStarted","Data":"764a00a11c004ef4a40559514c2e1f271a06bd1728493fb1c222e991442be4c4"} Jan 05 23:33:49 crc kubenswrapper[5034]: I0105 23:33:49.441245 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwctl"] Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.189589 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.190063 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.287977 5034 generic.go:334] "Generic (PLEG): container finished" podID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerID="2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d" exitCode=0 Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.288190 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwctl" event={"ID":"70eae5f9-cb89-41bb-8da1-6665c07a74b4","Type":"ContainerDied","Data":"2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d"} Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.288264 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwctl" event={"ID":"70eae5f9-cb89-41bb-8da1-6665c07a74b4","Type":"ContainerStarted","Data":"959501709a5e14bd6c5a7388ddd0f7cb097d67767f02bd61f5e991d6f9721615"} Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.303055 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fd7feb8-6291-4928-bf9b-534253512819","Type":"ContainerStarted","Data":"f6585e38bc6ac93c3d7fd618391da216744364fdcfeee177d28b47d2c1647d14"} Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.315202 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cee0d46-4ed0-4fc9-9f55-f035ef40fecc","Type":"ContainerStarted","Data":"b132ee42a2088bee65371cae442bb7869d4cb77dc0ddeae53ad7343bc5472b25"} Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.347088 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.352035 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.359598 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.359572998 podStartE2EDuration="8.359572998s" podCreationTimestamp="2026-01-05 23:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-05 23:33:50.34945059 +0000 UTC m=+6122.721450029" watchObservedRunningTime="2026-01-05 23:33:50.359572998 +0000 UTC m=+6122.731572437" Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.378907 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.378870475 podStartE2EDuration="8.378870475s" podCreationTimestamp="2026-01-05 23:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:33:50.371375573 +0000 UTC m=+6122.743375012" watchObservedRunningTime="2026-01-05 23:33:50.378870475 +0000 UTC m=+6122.750869914" Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.468611 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:33:50 crc kubenswrapper[5034]: I0105 23:33:50.468691 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:33:51 crc kubenswrapper[5034]: I0105 23:33:51.049767 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-909f-account-create-update-vzhr2"] Jan 05 23:33:51 crc kubenswrapper[5034]: I0105 23:33:51.060326 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-w28jt"] Jan 05 23:33:51 crc kubenswrapper[5034]: I0105 23:33:51.070503 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-w28jt"] Jan 05 23:33:51 crc kubenswrapper[5034]: I0105 23:33:51.079942 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-909f-account-create-update-vzhr2"] Jan 05 23:33:51 crc kubenswrapper[5034]: I0105 23:33:51.851823 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440a3282-7a5d-4fcd-af1d-89d6129d0cdb" path="/var/lib/kubelet/pods/440a3282-7a5d-4fcd-af1d-89d6129d0cdb/volumes" Jan 05 23:33:51 crc kubenswrapper[5034]: I0105 23:33:51.853646 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ed38be-ed9b-4d7d-91d1-c5435d7621eb" path="/var/lib/kubelet/pods/76ed38be-ed9b-4d7d-91d1-c5435d7621eb/volumes" Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.334002 5034 generic.go:334] "Generic (PLEG): container finished" podID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerID="cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d" exitCode=0 Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.334118 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwctl" event={"ID":"70eae5f9-cb89-41bb-8da1-6665c07a74b4","Type":"ContainerDied","Data":"cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d"} Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.803214 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.804531 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.845830 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.846976 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.848006 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.848028 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.904557 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 23:33:52 crc kubenswrapper[5034]: I0105 23:33:52.923776 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 23:33:53 crc kubenswrapper[5034]: I0105 23:33:53.348924 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:53 crc kubenswrapper[5034]: I0105 23:33:53.348964 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 23:33:53 crc kubenswrapper[5034]: I0105 23:33:53.348978 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:53 crc kubenswrapper[5034]: I0105 23:33:53.348987 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 23:33:54 crc kubenswrapper[5034]: I0105 23:33:54.366658 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwctl" event={"ID":"70eae5f9-cb89-41bb-8da1-6665c07a74b4","Type":"ContainerStarted","Data":"647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413"} Jan 05 23:33:54 crc kubenswrapper[5034]: I0105 23:33:54.394693 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nwctl" podStartSLOduration=3.577764839 podStartE2EDuration="6.394659251s" podCreationTimestamp="2026-01-05 23:33:48 +0000 UTC" firstStartedPulling="2026-01-05 23:33:50.295844259 +0000 UTC m=+6122.667843698" lastFinishedPulling="2026-01-05 23:33:53.112738671 +0000 UTC m=+6125.484738110" observedRunningTime="2026-01-05 23:33:54.389546246 +0000 UTC m=+6126.761545705" watchObservedRunningTime="2026-01-05 23:33:54.394659251 +0000 UTC m=+6126.766658690" Jan 05 23:33:55 crc kubenswrapper[5034]: I0105 23:33:55.375014 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 23:33:55 crc kubenswrapper[5034]: I0105 23:33:55.375089 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 23:33:55 crc kubenswrapper[5034]: I0105 23:33:55.663647 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 23:33:55 crc kubenswrapper[5034]: I0105 23:33:55.664654 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 23:33:55 crc kubenswrapper[5034]: I0105 23:33:55.780061 5034 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:56 crc kubenswrapper[5034]: I0105 23:33:56.382987 5034 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 23:33:56 crc kubenswrapper[5034]: I0105 23:33:56.943621 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 23:33:57 crc kubenswrapper[5034]: I0105 23:33:57.649203 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:33:57 crc kubenswrapper[5034]: I0105 23:33:57.734997 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:33:58 crc kubenswrapper[5034]: I0105 23:33:58.722320 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:58 crc kubenswrapper[5034]: I0105 23:33:58.722652 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:58 crc kubenswrapper[5034]: I0105 23:33:58.779715 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:59 crc kubenswrapper[5034]: I0105 23:33:59.033605 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hth4l"] Jan 05 23:33:59 crc kubenswrapper[5034]: I0105 23:33:59.044401 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hth4l"] Jan 05 23:33:59 crc kubenswrapper[5034]: I0105 23:33:59.462886 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:33:59 crc kubenswrapper[5034]: I0105 23:33:59.516570 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwctl"] Jan 05 23:33:59 crc kubenswrapper[5034]: I0105 23:33:59.870857 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593f1cee-9783-4841-a32b-1335a0c115fd" path="/var/lib/kubelet/pods/593f1cee-9783-4841-a32b-1335a0c115fd/volumes" Jan 05 23:34:00 crc kubenswrapper[5034]: I0105 23:34:00.191593 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bf57b7474-sp5s9" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.115:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8443: connect: connection refused" Jan 05 23:34:00 crc kubenswrapper[5034]: I0105 23:34:00.350194 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7966f5c6c6-ct6c7" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.116:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8443: connect: connection refused" Jan 05 23:34:01 crc kubenswrapper[5034]: I0105 23:34:01.436108 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nwctl" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerName="registry-server" containerID="cri-o://647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413" gracePeriod=2 Jan 05 23:34:01 crc kubenswrapper[5034]: I0105 23:34:01.986276 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.158883 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-utilities\") pod \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.158950 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-catalog-content\") pod \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.158999 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjdmn\" (UniqueName: \"kubernetes.io/projected/70eae5f9-cb89-41bb-8da1-6665c07a74b4-kube-api-access-wjdmn\") pod \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\" (UID: \"70eae5f9-cb89-41bb-8da1-6665c07a74b4\") " Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.160865 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-utilities" (OuterVolumeSpecName: "utilities") pod "70eae5f9-cb89-41bb-8da1-6665c07a74b4" (UID: "70eae5f9-cb89-41bb-8da1-6665c07a74b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.164941 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70eae5f9-cb89-41bb-8da1-6665c07a74b4-kube-api-access-wjdmn" (OuterVolumeSpecName: "kube-api-access-wjdmn") pod "70eae5f9-cb89-41bb-8da1-6665c07a74b4" (UID: "70eae5f9-cb89-41bb-8da1-6665c07a74b4"). InnerVolumeSpecName "kube-api-access-wjdmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.180720 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70eae5f9-cb89-41bb-8da1-6665c07a74b4" (UID: "70eae5f9-cb89-41bb-8da1-6665c07a74b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.261779 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.261814 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70eae5f9-cb89-41bb-8da1-6665c07a74b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.261828 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjdmn\" (UniqueName: \"kubernetes.io/projected/70eae5f9-cb89-41bb-8da1-6665c07a74b4-kube-api-access-wjdmn\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.447687 5034 generic.go:334] "Generic (PLEG): container finished" podID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerID="647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413" exitCode=0 Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.447748 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwctl" event={"ID":"70eae5f9-cb89-41bb-8da1-6665c07a74b4","Type":"ContainerDied","Data":"647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413"} Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.447788 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwctl" event={"ID":"70eae5f9-cb89-41bb-8da1-6665c07a74b4","Type":"ContainerDied","Data":"959501709a5e14bd6c5a7388ddd0f7cb097d67767f02bd61f5e991d6f9721615"} Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.447817 5034 scope.go:117] "RemoveContainer" containerID="647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.448052 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwctl" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.487895 5034 scope.go:117] "RemoveContainer" containerID="cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.494205 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwctl"] Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.500837 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwctl"] Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.536142 5034 scope.go:117] "RemoveContainer" containerID="2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.571474 5034 scope.go:117] "RemoveContainer" containerID="647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413" Jan 05 23:34:02 crc kubenswrapper[5034]: E0105 23:34:02.572113 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413\": container with ID starting with 647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413 not found: ID does not exist" containerID="647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.572162 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413"} err="failed to get container status \"647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413\": rpc error: code = NotFound desc = could not find container \"647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413\": container with ID starting with 647cabdc65df2ca70b5ce011ff28c84b2e8f740a75c080b4bd41cc1047197413 not found: ID does not exist" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.572193 5034 scope.go:117] "RemoveContainer" containerID="cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d" Jan 05 23:34:02 crc kubenswrapper[5034]: E0105 23:34:02.572930 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d\": container with ID starting with cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d not found: ID does not exist" containerID="cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.572968 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d"} err="failed to get container status \"cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d\": rpc error: code = NotFound desc = could not find container \"cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d\": container with ID starting with cca8860194438365e7c9f9335d3f2d19b13e45250cf5b2d00ae0fc2f301f3d8d not found: ID does not exist" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.572989 5034 scope.go:117] "RemoveContainer" containerID="2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d" Jan 05 23:34:02 crc kubenswrapper[5034]: E0105 23:34:02.573319 5034 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d\": container with ID starting with 2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d not found: ID does not exist" containerID="2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d" Jan 05 23:34:02 crc kubenswrapper[5034]: I0105 23:34:02.573373 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d"} err="failed to get container status \"2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d\": rpc error: code = NotFound desc = could not find container \"2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d\": container with ID starting with 2c026e75985236d9f81aeaf635150291120303f5b6ee34823c81b46ed667e45d not found: ID does not exist" Jan 05 23:34:03 crc kubenswrapper[5034]: I0105 23:34:03.861006 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" path="/var/lib/kubelet/pods/70eae5f9-cb89-41bb-8da1-6665c07a74b4/volumes" Jan 05 23:34:06 crc kubenswrapper[5034]: I0105 23:34:06.218842 5034 scope.go:117] "RemoveContainer" containerID="a9d9bf337abb9c9fe4c32b360dec69ca24df71a9ab68a93143913aefadcab6aa" Jan 05 23:34:06 crc kubenswrapper[5034]: I0105 23:34:06.274330 5034 scope.go:117] "RemoveContainer" containerID="4cd0e45466d381cb45f9a6e10b33dcc2103bde34ddad1318b2b070a8b7ecf005" Jan 05 23:34:06 crc kubenswrapper[5034]: I0105 23:34:06.317323 5034 scope.go:117] "RemoveContainer" containerID="77bbee72d22f20d5c6624da7132b7d2b3af3d3fbd5a7de19dd7f29f910ee3ef1" Jan 05 23:34:06 crc kubenswrapper[5034]: I0105 23:34:06.370857 5034 scope.go:117] "RemoveContainer" containerID="28023b5ae9f5ff89b6d4e0e64195584e0174132239ed53308e4d703ab907a48f" Jan 05 23:34:06 crc kubenswrapper[5034]: I0105 23:34:06.444168 5034 scope.go:117] "RemoveContainer" containerID="525b566f1e35e56f23580f1607e00e020f86aa5c4d9a03c9c828fccf88561349" Jan 05 23:34:06 crc kubenswrapper[5034]: I0105 23:34:06.479006 5034 scope.go:117] "RemoveContainer" containerID="6f159d85178c914b7f3db80ef61c2de61175c338db4d4773a8139adc8e22f91e" Jan 05 23:34:12 crc kubenswrapper[5034]: I0105 23:34:12.033122 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:34:12 crc kubenswrapper[5034]: I0105 23:34:12.169344 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:34:13 crc kubenswrapper[5034]: I0105 23:34:13.742359 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:34:13 crc kubenswrapper[5034]: I0105 23:34:13.998749 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:34:14 crc kubenswrapper[5034]: I0105 23:34:14.083044 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bf57b7474-sp5s9"] Jan 05 23:34:14 crc kubenswrapper[5034]: I0105 23:34:14.575002 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bf57b7474-sp5s9" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon-log" containerID="cri-o://f0524a9770bf3e5791e8fe4c07737aeea0773d857d16fda09849f22e19c9bc7e" gracePeriod=30 Jan 05 23:34:14 crc kubenswrapper[5034]: I0105 23:34:14.575102 
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.611521 5034 generic.go:334] "Generic (PLEG): container finished" podID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerID="afb1ccf19a436dff6531f5c50da289ec669db82f0b2dc659dcfdcc987654717d" exitCode=0
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.612830 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf57b7474-sp5s9" event={"ID":"0ea2d3fc-193c-497f-9d06-42f9902c818e","Type":"ContainerDied","Data":"afb1ccf19a436dff6531f5c50da289ec669db82f0b2dc659dcfdcc987654717d"}
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.614758 5034 generic.go:334] "Generic (PLEG): container finished" podID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerID="3c9a31fdd84df109c291954a36aec019e049367ccb73bb8f644f1ed677b5f7b8" exitCode=137
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.614841 5034 generic.go:334] "Generic (PLEG): container finished" podID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerID="30aff88540cdc3a8d2c5815292118b034cfac20451cdd29999b44af862a3c3e7" exitCode=137
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.614919 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66dbcfc7c-6qsh2" event={"ID":"b391b390-6f44-4bbc-b444-f504511bf7aa","Type":"ContainerDied","Data":"3c9a31fdd84df109c291954a36aec019e049367ccb73bb8f644f1ed677b5f7b8"}
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.614980 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66dbcfc7c-6qsh2" event={"ID":"b391b390-6f44-4bbc-b444-f504511bf7aa","Type":"ContainerDied","Data":"30aff88540cdc3a8d2c5815292118b034cfac20451cdd29999b44af862a3c3e7"}
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.615034 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66dbcfc7c-6qsh2" event={"ID":"b391b390-6f44-4bbc-b444-f504511bf7aa","Type":"ContainerDied","Data":"e438a4ae854b7904a6ca6a365b2944fecfe368b317679feff63bc749f444313a"}
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.615109 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e438a4ae854b7904a6ca6a365b2944fecfe368b317679feff63bc749f444313a"
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.617333 5034 generic.go:334] "Generic (PLEG): container finished" podID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerID="84b88905f30f9f29c945ad6b42f6bcc84a04487da3a146821bebde81496910dd" exitCode=137
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.617363 5034 generic.go:334] "Generic (PLEG): container finished" podID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerID="638c5ff766acd63968bfe7c51957902177e214312f745bb76a28552afff017af" exitCode=137
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.617387 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78c77cf66f-ld7mf" event={"ID":"a59d5fe8-ae40-46e7-8d53-86d6facef712","Type":"ContainerDied","Data":"84b88905f30f9f29c945ad6b42f6bcc84a04487da3a146821bebde81496910dd"}
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.617418 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78c77cf66f-ld7mf" event={"ID":"a59d5fe8-ae40-46e7-8d53-86d6facef712","Type":"ContainerDied","Data":"638c5ff766acd63968bfe7c51957902177e214312f745bb76a28552afff017af"}
event={"ID":"a59d5fe8-ae40-46e7-8d53-86d6facef712","Type":"ContainerDied","Data":"638c5ff766acd63968bfe7c51957902177e214312f745bb76a28552afff017af"} Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.617431 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78c77cf66f-ld7mf" event={"ID":"a59d5fe8-ae40-46e7-8d53-86d6facef712","Type":"ContainerDied","Data":"bf6684ecf5d630c9a9230291739628e3f15096b4d585977eaf419f047059f475"} Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.617440 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6684ecf5d630c9a9230291739628e3f15096b4d585977eaf419f047059f475" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.645136 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.652313 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.792949 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-scripts\") pod \"a59d5fe8-ae40-46e7-8d53-86d6facef712\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793146 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b391b390-6f44-4bbc-b444-f504511bf7aa-horizon-secret-key\") pod \"b391b390-6f44-4bbc-b444-f504511bf7aa\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793186 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a59d5fe8-ae40-46e7-8d53-86d6facef712-horizon-secret-key\") pod \"a59d5fe8-ae40-46e7-8d53-86d6facef712\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793220 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-scripts\") pod \"b391b390-6f44-4bbc-b444-f504511bf7aa\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793250 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b391b390-6f44-4bbc-b444-f504511bf7aa-logs\") pod \"b391b390-6f44-4bbc-b444-f504511bf7aa\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") " Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793301 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59p8x\" (UniqueName: \"kubernetes.io/projected/a59d5fe8-ae40-46e7-8d53-86d6facef712-kube-api-access-59p8x\") pod \"a59d5fe8-ae40-46e7-8d53-86d6facef712\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793383 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a59d5fe8-ae40-46e7-8d53-86d6facef712-logs\") pod \"a59d5fe8-ae40-46e7-8d53-86d6facef712\" (UID: \"a59d5fe8-ae40-46e7-8d53-86d6facef712\") " Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793505 5034 
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793614 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgp2k\" (UniqueName: \"kubernetes.io/projected/b391b390-6f44-4bbc-b444-f504511bf7aa-kube-api-access-qgp2k\") pod \"b391b390-6f44-4bbc-b444-f504511bf7aa\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") "
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793648 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-config-data\") pod \"b391b390-6f44-4bbc-b444-f504511bf7aa\" (UID: \"b391b390-6f44-4bbc-b444-f504511bf7aa\") "
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793953 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b391b390-6f44-4bbc-b444-f504511bf7aa-logs" (OuterVolumeSpecName: "logs") pod "b391b390-6f44-4bbc-b444-f504511bf7aa" (UID: "b391b390-6f44-4bbc-b444-f504511bf7aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.793975 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59d5fe8-ae40-46e7-8d53-86d6facef712-logs" (OuterVolumeSpecName: "logs") pod "a59d5fe8-ae40-46e7-8d53-86d6facef712" (UID: "a59d5fe8-ae40-46e7-8d53-86d6facef712"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.794259 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b391b390-6f44-4bbc-b444-f504511bf7aa-logs\") on node \"crc\" DevicePath \"\""
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.794280 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a59d5fe8-ae40-46e7-8d53-86d6facef712-logs\") on node \"crc\" DevicePath \"\""
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.798945 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b391b390-6f44-4bbc-b444-f504511bf7aa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b391b390-6f44-4bbc-b444-f504511bf7aa" (UID: "b391b390-6f44-4bbc-b444-f504511bf7aa"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.809256 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59d5fe8-ae40-46e7-8d53-86d6facef712-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a59d5fe8-ae40-46e7-8d53-86d6facef712" (UID: "a59d5fe8-ae40-46e7-8d53-86d6facef712"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.809292 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59d5fe8-ae40-46e7-8d53-86d6facef712-kube-api-access-59p8x" (OuterVolumeSpecName: "kube-api-access-59p8x") pod "a59d5fe8-ae40-46e7-8d53-86d6facef712" (UID: "a59d5fe8-ae40-46e7-8d53-86d6facef712"). InnerVolumeSpecName "kube-api-access-59p8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.809338 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b391b390-6f44-4bbc-b444-f504511bf7aa-kube-api-access-qgp2k" (OuterVolumeSpecName: "kube-api-access-qgp2k") pod "b391b390-6f44-4bbc-b444-f504511bf7aa" (UID: "b391b390-6f44-4bbc-b444-f504511bf7aa"). InnerVolumeSpecName "kube-api-access-qgp2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.822792 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-scripts" (OuterVolumeSpecName: "scripts") pod "a59d5fe8-ae40-46e7-8d53-86d6facef712" (UID: "a59d5fe8-ae40-46e7-8d53-86d6facef712"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.823026 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-scripts" (OuterVolumeSpecName: "scripts") pod "b391b390-6f44-4bbc-b444-f504511bf7aa" (UID: "b391b390-6f44-4bbc-b444-f504511bf7aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.823491 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-config-data" (OuterVolumeSpecName: "config-data") pod "a59d5fe8-ae40-46e7-8d53-86d6facef712" (UID: "a59d5fe8-ae40-46e7-8d53-86d6facef712"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.826110 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-config-data" (OuterVolumeSpecName: "config-data") pod "b391b390-6f44-4bbc-b444-f504511bf7aa" (UID: "b391b390-6f44-4bbc-b444-f504511bf7aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.897071 5034 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b391b390-6f44-4bbc-b444-f504511bf7aa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.897275 5034 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a59d5fe8-ae40-46e7-8d53-86d6facef712-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.897316 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.897374 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59p8x\" (UniqueName: \"kubernetes.io/projected/a59d5fe8-ae40-46e7-8d53-86d6facef712-kube-api-access-59p8x\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.897815 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.897833 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgp2k\" (UniqueName: \"kubernetes.io/projected/b391b390-6f44-4bbc-b444-f504511bf7aa-kube-api-access-qgp2k\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.897843 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b391b390-6f44-4bbc-b444-f504511bf7aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:18 crc kubenswrapper[5034]: I0105 23:34:18.897852 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59d5fe8-ae40-46e7-8d53-86d6facef712-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:19 crc kubenswrapper[5034]: I0105 23:34:19.626520 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66dbcfc7c-6qsh2" Jan 05 23:34:19 crc kubenswrapper[5034]: I0105 23:34:19.626565 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78c77cf66f-ld7mf" Jan 05 23:34:19 crc kubenswrapper[5034]: I0105 23:34:19.677239 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66dbcfc7c-6qsh2"] Jan 05 23:34:19 crc kubenswrapper[5034]: I0105 23:34:19.687168 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66dbcfc7c-6qsh2"] Jan 05 23:34:19 crc kubenswrapper[5034]: I0105 23:34:19.696350 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78c77cf66f-ld7mf"] Jan 05 23:34:19 crc kubenswrapper[5034]: I0105 23:34:19.728821 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78c77cf66f-ld7mf"] Jan 05 23:34:19 crc kubenswrapper[5034]: I0105 23:34:19.849988 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" path="/var/lib/kubelet/pods/a59d5fe8-ae40-46e7-8d53-86d6facef712/volumes" Jan 05 23:34:19 crc kubenswrapper[5034]: I0105 23:34:19.851053 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" path="/var/lib/kubelet/pods/b391b390-6f44-4bbc-b444-f504511bf7aa/volumes" Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.189722 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bf57b7474-sp5s9" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.115:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8443: connect: connection refused" Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.469548 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.469617 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.469666 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.470634 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a88a5134ff25bff3380251394560c9cbca0838a1161bcde80ce38bf8d4b764a1"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.470701 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://a88a5134ff25bff3380251394560c9cbca0838a1161bcde80ce38bf8d4b764a1" gracePeriod=600 Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.653752 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" 
containerID="a88a5134ff25bff3380251394560c9cbca0838a1161bcde80ce38bf8d4b764a1" exitCode=0 Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.653808 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"a88a5134ff25bff3380251394560c9cbca0838a1161bcde80ce38bf8d4b764a1"} Jan 05 23:34:20 crc kubenswrapper[5034]: I0105 23:34:20.653854 5034 scope.go:117] "RemoveContainer" containerID="904ee165acd68aec0a420207d90877be6e72590e164991bf8b342c985484a2ef" Jan 05 23:34:21 crc kubenswrapper[5034]: I0105 23:34:21.671176 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16"} Jan 05 23:34:30 crc kubenswrapper[5034]: I0105 23:34:30.189712 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bf57b7474-sp5s9" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.115:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8443: connect: connection refused" Jan 05 23:34:40 crc kubenswrapper[5034]: I0105 23:34:40.189447 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bf57b7474-sp5s9" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.115:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8443: connect: connection refused" Jan 05 23:34:40 crc kubenswrapper[5034]: I0105 23:34:40.190129 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:34:44 crc kubenswrapper[5034]: I0105 23:34:44.942041 5034 generic.go:334] "Generic (PLEG): container finished" podID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerID="f0524a9770bf3e5791e8fe4c07737aeea0773d857d16fda09849f22e19c9bc7e" exitCode=137 Jan 05 23:34:44 crc kubenswrapper[5034]: I0105 23:34:44.942119 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf57b7474-sp5s9" event={"ID":"0ea2d3fc-193c-497f-9d06-42f9902c818e","Type":"ContainerDied","Data":"f0524a9770bf3e5791e8fe4c07737aeea0773d857d16fda09849f22e19c9bc7e"} Jan 05 23:34:44 crc kubenswrapper[5034]: I0105 23:34:44.942640 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf57b7474-sp5s9" event={"ID":"0ea2d3fc-193c-497f-9d06-42f9902c818e","Type":"ContainerDied","Data":"6ff1936fdd08592bfe8d0a0a58ff40e21bf9ee2eaca59ef895467762bbc1878b"} Jan 05 23:34:44 crc kubenswrapper[5034]: I0105 23:34:44.942661 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff1936fdd08592bfe8d0a0a58ff40e21bf9ee2eaca59ef895467762bbc1878b" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.033678 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.143476 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea2d3fc-193c-497f-9d06-42f9902c818e-logs\") pod \"0ea2d3fc-193c-497f-9d06-42f9902c818e\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.143850 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-tls-certs\") pod \"0ea2d3fc-193c-497f-9d06-42f9902c818e\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.143899 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-combined-ca-bundle\") pod \"0ea2d3fc-193c-497f-9d06-42f9902c818e\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.143982 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-scripts\") pod \"0ea2d3fc-193c-497f-9d06-42f9902c818e\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.143982 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ea2d3fc-193c-497f-9d06-42f9902c818e-logs" (OuterVolumeSpecName: "logs") pod "0ea2d3fc-193c-497f-9d06-42f9902c818e" (UID: "0ea2d3fc-193c-497f-9d06-42f9902c818e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.144016 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-secret-key\") pod \"0ea2d3fc-193c-497f-9d06-42f9902c818e\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.144115 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-config-data\") pod \"0ea2d3fc-193c-497f-9d06-42f9902c818e\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.144169 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4rdm\" (UniqueName: \"kubernetes.io/projected/0ea2d3fc-193c-497f-9d06-42f9902c818e-kube-api-access-t4rdm\") pod \"0ea2d3fc-193c-497f-9d06-42f9902c818e\" (UID: \"0ea2d3fc-193c-497f-9d06-42f9902c818e\") " Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.144719 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea2d3fc-193c-497f-9d06-42f9902c818e-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.149985 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea2d3fc-193c-497f-9d06-42f9902c818e-kube-api-access-t4rdm" (OuterVolumeSpecName: "kube-api-access-t4rdm") pod "0ea2d3fc-193c-497f-9d06-42f9902c818e" (UID: "0ea2d3fc-193c-497f-9d06-42f9902c818e"). 
InnerVolumeSpecName "kube-api-access-t4rdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.175299 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0ea2d3fc-193c-497f-9d06-42f9902c818e" (UID: "0ea2d3fc-193c-497f-9d06-42f9902c818e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.177163 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-scripts" (OuterVolumeSpecName: "scripts") pod "0ea2d3fc-193c-497f-9d06-42f9902c818e" (UID: "0ea2d3fc-193c-497f-9d06-42f9902c818e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.180608 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-config-data" (OuterVolumeSpecName: "config-data") pod "0ea2d3fc-193c-497f-9d06-42f9902c818e" (UID: "0ea2d3fc-193c-497f-9d06-42f9902c818e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.187741 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ea2d3fc-193c-497f-9d06-42f9902c818e" (UID: "0ea2d3fc-193c-497f-9d06-42f9902c818e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.206613 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "0ea2d3fc-193c-497f-9d06-42f9902c818e" (UID: "0ea2d3fc-193c-497f-9d06-42f9902c818e"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.247177 5034 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.247218 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.247229 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.247239 5034 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0ea2d3fc-193c-497f-9d06-42f9902c818e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.247248 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ea2d3fc-193c-497f-9d06-42f9902c818e-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.247257 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4rdm\" (UniqueName: \"kubernetes.io/projected/0ea2d3fc-193c-497f-9d06-42f9902c818e-kube-api-access-t4rdm\") on node \"crc\" DevicePath \"\"" Jan 05 23:34:45 crc kubenswrapper[5034]: I0105 23:34:45.950430 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bf57b7474-sp5s9" Jan 05 23:34:46 crc kubenswrapper[5034]: I0105 23:34:46.007929 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bf57b7474-sp5s9"] Jan 05 23:34:46 crc kubenswrapper[5034]: I0105 23:34:46.018177 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bf57b7474-sp5s9"] Jan 05 23:34:47 crc kubenswrapper[5034]: I0105 23:34:47.852684 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" path="/var/lib/kubelet/pods/0ea2d3fc-193c-497f-9d06-42f9902c818e/volumes" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.633061 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d7cbf459d-njclw"] Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 23:34:55.634143 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634162 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 23:34:55.634179 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634187 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 23:34:55.634204 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634214 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 23:34:55.634232 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634240 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 23:34:55.634256 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634264 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 23:34:55.634279 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerName="extract-utilities" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634287 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerName="extract-utilities" Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 23:34:55.634314 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634320 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 
23:34:55.634334 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerName="registry-server" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634340 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerName="registry-server" Jan 05 23:34:55 crc kubenswrapper[5034]: E0105 23:34:55.634352 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerName="extract-content" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634358 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerName="extract-content" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634581 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634595 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634609 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634621 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59d5fe8-ae40-46e7-8d53-86d6facef712" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634634 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b391b390-6f44-4bbc-b444-f504511bf7aa" containerName="horizon" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634646 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="70eae5f9-cb89-41bb-8da1-6665c07a74b4" containerName="registry-server" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.634658 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea2d3fc-193c-497f-9d06-42f9902c818e" containerName="horizon-log" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.636196 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.652152 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d7cbf459d-njclw"] Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.809632 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aac7508-cac0-47ed-8636-25c715c0b8b9-scripts\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.809692 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-horizon-tls-certs\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.809747 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-combined-ca-bundle\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.809794 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aac7508-cac0-47ed-8636-25c715c0b8b9-config-data\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.809838 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aac7508-cac0-47ed-8636-25c715c0b8b9-logs\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.809867 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-horizon-secret-key\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.809891 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcxbx\" (UniqueName: \"kubernetes.io/projected/3aac7508-cac0-47ed-8636-25c715c0b8b9-kube-api-access-kcxbx\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.911921 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aac7508-cac0-47ed-8636-25c715c0b8b9-scripts\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.911982 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-horizon-tls-certs\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.912027 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-combined-ca-bundle\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.912105 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aac7508-cac0-47ed-8636-25c715c0b8b9-config-data\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.912174 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aac7508-cac0-47ed-8636-25c715c0b8b9-logs\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.912220 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-horizon-secret-key\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.912267 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcxbx\" (UniqueName: \"kubernetes.io/projected/3aac7508-cac0-47ed-8636-25c715c0b8b9-kube-api-access-kcxbx\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.913347 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aac7508-cac0-47ed-8636-25c715c0b8b9-logs\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.914132 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aac7508-cac0-47ed-8636-25c715c0b8b9-scripts\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.914161 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aac7508-cac0-47ed-8636-25c715c0b8b9-config-data\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.933950 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-horizon-tls-certs\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 
23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.934476 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-combined-ca-bundle\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.934694 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3aac7508-cac0-47ed-8636-25c715c0b8b9-horizon-secret-key\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.940371 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcxbx\" (UniqueName: \"kubernetes.io/projected/3aac7508-cac0-47ed-8636-25c715c0b8b9-kube-api-access-kcxbx\") pod \"horizon-d7cbf459d-njclw\" (UID: \"3aac7508-cac0-47ed-8636-25c715c0b8b9\") " pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:55 crc kubenswrapper[5034]: I0105 23:34:55.969396 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:34:56 crc kubenswrapper[5034]: I0105 23:34:56.484375 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d7cbf459d-njclw"] Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.093009 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d7cbf459d-njclw" event={"ID":"3aac7508-cac0-47ed-8636-25c715c0b8b9","Type":"ContainerStarted","Data":"ff7987bdf6574c6c7c980c336fe6b05c73f5c628efb0db3e0d26783577975050"} Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.093485 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d7cbf459d-njclw" event={"ID":"3aac7508-cac0-47ed-8636-25c715c0b8b9","Type":"ContainerStarted","Data":"dd07ca4448c58fc449b0b8079d0bbd9d3bf6dc145865d845a0f86df86a8ff703"} Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.093497 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d7cbf459d-njclw" event={"ID":"3aac7508-cac0-47ed-8636-25c715c0b8b9","Type":"ContainerStarted","Data":"ad775479611fa64151040e23cb03a346809e433f8ceb105881e6b8ad13106a65"} Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.121936 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d7cbf459d-njclw" podStartSLOduration=2.12191285 podStartE2EDuration="2.12191285s" podCreationTimestamp="2026-01-05 23:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:34:57.119057699 +0000 UTC m=+6189.491057148" watchObservedRunningTime="2026-01-05 23:34:57.12191285 +0000 UTC m=+6189.493912289" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.245352 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-5zqpd"] Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.248286 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-5zqpd" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.285511 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5zqpd"] Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.319124 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-d63d-account-create-update-ddp5l"] Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.320836 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.327457 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.336614 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-d63d-account-create-update-ddp5l"] Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.362728 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-operator-scripts\") pod \"heat-db-create-5zqpd\" (UID: \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\") " pod="openstack/heat-db-create-5zqpd" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.363069 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6fc\" (UniqueName: \"kubernetes.io/projected/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-kube-api-access-nf6fc\") pod \"heat-db-create-5zqpd\" (UID: \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\") " pod="openstack/heat-db-create-5zqpd" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.465059 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6fc\" (UniqueName: \"kubernetes.io/projected/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-kube-api-access-nf6fc\") pod \"heat-db-create-5zqpd\" (UID: \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\") " pod="openstack/heat-db-create-5zqpd" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.465220 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6d9\" (UniqueName: \"kubernetes.io/projected/1b977528-7379-4e3d-b770-31df686e4fdc-kube-api-access-zm6d9\") pod \"heat-d63d-account-create-update-ddp5l\" (UID: \"1b977528-7379-4e3d-b770-31df686e4fdc\") " pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.465367 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-operator-scripts\") pod \"heat-db-create-5zqpd\" (UID: \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\") " pod="openstack/heat-db-create-5zqpd" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.465418 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b977528-7379-4e3d-b770-31df686e4fdc-operator-scripts\") pod \"heat-d63d-account-create-update-ddp5l\" (UID: \"1b977528-7379-4e3d-b770-31df686e4fdc\") " pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.466427 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-operator-scripts\") pod \"heat-db-create-5zqpd\" (UID: \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\") " pod="openstack/heat-db-create-5zqpd" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.493839 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6fc\" (UniqueName: \"kubernetes.io/projected/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-kube-api-access-nf6fc\") pod \"heat-db-create-5zqpd\" (UID: \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\") " pod="openstack/heat-db-create-5zqpd" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.567521 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b977528-7379-4e3d-b770-31df686e4fdc-operator-scripts\") pod \"heat-d63d-account-create-update-ddp5l\" (UID: \"1b977528-7379-4e3d-b770-31df686e4fdc\") " pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.567693 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6d9\" (UniqueName: \"kubernetes.io/projected/1b977528-7379-4e3d-b770-31df686e4fdc-kube-api-access-zm6d9\") pod \"heat-d63d-account-create-update-ddp5l\" (UID: \"1b977528-7379-4e3d-b770-31df686e4fdc\") " pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.568583 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b977528-7379-4e3d-b770-31df686e4fdc-operator-scripts\") pod \"heat-d63d-account-create-update-ddp5l\" (UID: \"1b977528-7379-4e3d-b770-31df686e4fdc\") " pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.587363 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6d9\" (UniqueName: \"kubernetes.io/projected/1b977528-7379-4e3d-b770-31df686e4fdc-kube-api-access-zm6d9\") pod \"heat-d63d-account-create-update-ddp5l\" (UID: \"1b977528-7379-4e3d-b770-31df686e4fdc\") " pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.593685 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5zqpd" Jan 05 23:34:57 crc kubenswrapper[5034]: I0105 23:34:57.650895 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:34:58 crc kubenswrapper[5034]: I0105 23:34:58.226726 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-d63d-account-create-update-ddp5l"] Jan 05 23:34:58 crc kubenswrapper[5034]: W0105 23:34:58.320406 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod591b10b9_dfcd_46d4_818f_dcfb0fea7ef4.slice/crio-45e3a67790ed843f2717570791f8235c590d41b6f7095f8280e932ac20b3e464 WatchSource:0}: Error finding container 45e3a67790ed843f2717570791f8235c590d41b6f7095f8280e932ac20b3e464: Status 404 returned error can't find the container with id 45e3a67790ed843f2717570791f8235c590d41b6f7095f8280e932ac20b3e464 Jan 05 23:34:58 crc kubenswrapper[5034]: I0105 23:34:58.323932 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5zqpd"] Jan 05 23:34:59 crc kubenswrapper[5034]: I0105 23:34:59.115646 5034 generic.go:334] "Generic (PLEG): container finished" podID="591b10b9-dfcd-46d4-818f-dcfb0fea7ef4" containerID="b73044afed82a0f6a824a956e46818b6bdc705af5f1c0c2d38d82078551503ef" exitCode=0 Jan 05 23:34:59 crc kubenswrapper[5034]: I0105 23:34:59.115741 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5zqpd" event={"ID":"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4","Type":"ContainerDied","Data":"b73044afed82a0f6a824a956e46818b6bdc705af5f1c0c2d38d82078551503ef"} Jan 05 23:34:59 crc kubenswrapper[5034]: I0105 23:34:59.116245 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5zqpd" event={"ID":"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4","Type":"ContainerStarted","Data":"45e3a67790ed843f2717570791f8235c590d41b6f7095f8280e932ac20b3e464"} Jan 05 23:34:59 crc kubenswrapper[5034]: I0105 23:34:59.120400 5034 generic.go:334] "Generic (PLEG): container finished" podID="1b977528-7379-4e3d-b770-31df686e4fdc" containerID="74daa58ceb606ae0a82b66318bcd8f40ece915832c76efaec898bbf1f2cbec42" exitCode=0 Jan 05 23:34:59 crc kubenswrapper[5034]: I0105 23:34:59.120433 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d63d-account-create-update-ddp5l" event={"ID":"1b977528-7379-4e3d-b770-31df686e4fdc","Type":"ContainerDied","Data":"74daa58ceb606ae0a82b66318bcd8f40ece915832c76efaec898bbf1f2cbec42"} Jan 05 23:34:59 crc kubenswrapper[5034]: I0105 23:34:59.120458 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d63d-account-create-update-ddp5l" event={"ID":"1b977528-7379-4e3d-b770-31df686e4fdc","Type":"ContainerStarted","Data":"40316bd2c28cdd953aa8a14295cf294840cc9d087280b3d70be234ddd646af49"} Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.606230 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5zqpd" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.611833 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.746003 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf6fc\" (UniqueName: \"kubernetes.io/projected/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-kube-api-access-nf6fc\") pod \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\" (UID: \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\") " Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.746609 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b977528-7379-4e3d-b770-31df686e4fdc-operator-scripts\") pod \"1b977528-7379-4e3d-b770-31df686e4fdc\" (UID: \"1b977528-7379-4e3d-b770-31df686e4fdc\") " Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.746723 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-operator-scripts\") pod \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\" (UID: \"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4\") " Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.746892 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm6d9\" (UniqueName: \"kubernetes.io/projected/1b977528-7379-4e3d-b770-31df686e4fdc-kube-api-access-zm6d9\") pod \"1b977528-7379-4e3d-b770-31df686e4fdc\" (UID: \"1b977528-7379-4e3d-b770-31df686e4fdc\") " Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.747029 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "591b10b9-dfcd-46d4-818f-dcfb0fea7ef4" (UID: "591b10b9-dfcd-46d4-818f-dcfb0fea7ef4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.747138 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b977528-7379-4e3d-b770-31df686e4fdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b977528-7379-4e3d-b770-31df686e4fdc" (UID: "1b977528-7379-4e3d-b770-31df686e4fdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.749308 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b977528-7379-4e3d-b770-31df686e4fdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.749426 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.751924 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b977528-7379-4e3d-b770-31df686e4fdc-kube-api-access-zm6d9" (OuterVolumeSpecName: "kube-api-access-zm6d9") pod "1b977528-7379-4e3d-b770-31df686e4fdc" (UID: "1b977528-7379-4e3d-b770-31df686e4fdc"). InnerVolumeSpecName "kube-api-access-zm6d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.752747 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-kube-api-access-nf6fc" (OuterVolumeSpecName: "kube-api-access-nf6fc") pod "591b10b9-dfcd-46d4-818f-dcfb0fea7ef4" (UID: "591b10b9-dfcd-46d4-818f-dcfb0fea7ef4"). InnerVolumeSpecName "kube-api-access-nf6fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.851583 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf6fc\" (UniqueName: \"kubernetes.io/projected/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4-kube-api-access-nf6fc\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:00 crc kubenswrapper[5034]: I0105 23:35:00.851622 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm6d9\" (UniqueName: \"kubernetes.io/projected/1b977528-7379-4e3d-b770-31df686e4fdc-kube-api-access-zm6d9\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:01 crc kubenswrapper[5034]: I0105 23:35:01.139385 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-d63d-account-create-update-ddp5l" Jan 05 23:35:01 crc kubenswrapper[5034]: I0105 23:35:01.139411 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d63d-account-create-update-ddp5l" event={"ID":"1b977528-7379-4e3d-b770-31df686e4fdc","Type":"ContainerDied","Data":"40316bd2c28cdd953aa8a14295cf294840cc9d087280b3d70be234ddd646af49"} Jan 05 23:35:01 crc kubenswrapper[5034]: I0105 23:35:01.139889 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40316bd2c28cdd953aa8a14295cf294840cc9d087280b3d70be234ddd646af49" Jan 05 23:35:01 crc kubenswrapper[5034]: I0105 23:35:01.141675 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5zqpd" event={"ID":"591b10b9-dfcd-46d4-818f-dcfb0fea7ef4","Type":"ContainerDied","Data":"45e3a67790ed843f2717570791f8235c590d41b6f7095f8280e932ac20b3e464"} Jan 05 23:35:01 crc kubenswrapper[5034]: I0105 23:35:01.141725 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-5zqpd" Jan 05 23:35:01 crc kubenswrapper[5034]: I0105 23:35:01.141744 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45e3a67790ed843f2717570791f8235c590d41b6f7095f8280e932ac20b3e464" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.446005 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-s7t6l"] Jan 05 23:35:02 crc kubenswrapper[5034]: E0105 23:35:02.446817 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591b10b9-dfcd-46d4-818f-dcfb0fea7ef4" containerName="mariadb-database-create" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.446833 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="591b10b9-dfcd-46d4-818f-dcfb0fea7ef4" containerName="mariadb-database-create" Jan 05 23:35:02 crc kubenswrapper[5034]: E0105 23:35:02.446852 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b977528-7379-4e3d-b770-31df686e4fdc" containerName="mariadb-account-create-update" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.446858 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b977528-7379-4e3d-b770-31df686e4fdc" containerName="mariadb-account-create-update" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.447064 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="591b10b9-dfcd-46d4-818f-dcfb0fea7ef4" containerName="mariadb-database-create" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.447106 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b977528-7379-4e3d-b770-31df686e4fdc" containerName="mariadb-account-create-update" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.447935 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.450670 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9vsgg" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.450877 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.457774 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-s7t6l"] Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.596499 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz82n\" (UniqueName: \"kubernetes.io/projected/999257cc-1aca-404f-834c-ddb12373b69e-kube-api-access-gz82n\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.596596 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-config-data\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.596659 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-combined-ca-bundle\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.699027 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz82n\" (UniqueName: \"kubernetes.io/projected/999257cc-1aca-404f-834c-ddb12373b69e-kube-api-access-gz82n\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.699501 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-config-data\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.700326 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-combined-ca-bundle\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.705758 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-combined-ca-bundle\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.708695 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-config-data\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" 
Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.720673 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz82n\" (UniqueName: \"kubernetes.io/projected/999257cc-1aca-404f-834c-ddb12373b69e-kube-api-access-gz82n\") pod \"heat-db-sync-s7t6l\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:02 crc kubenswrapper[5034]: I0105 23:35:02.765540 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:03 crc kubenswrapper[5034]: I0105 23:35:03.299879 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-s7t6l"] Jan 05 23:35:04 crc kubenswrapper[5034]: I0105 23:35:04.172068 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-s7t6l" event={"ID":"999257cc-1aca-404f-834c-ddb12373b69e","Type":"ContainerStarted","Data":"8b3f73710ef11be19f99e5b7781f9cafc8cacb16205e9168ffcbef4c5d210767"} Jan 05 23:35:05 crc kubenswrapper[5034]: I0105 23:35:05.970331 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:35:05 crc kubenswrapper[5034]: I0105 23:35:05.971690 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:35:06 crc kubenswrapper[5034]: I0105 23:35:06.726157 5034 scope.go:117] "RemoveContainer" containerID="9d8ad031803098c4ea7fc6879c9f9b50adc7c006ad8a355e7661d71ee4741182" Jan 05 23:35:10 crc kubenswrapper[5034]: I0105 23:35:10.610672 5034 scope.go:117] "RemoveContainer" containerID="e43198f3e0afb2c482419addfa9ffe06c69e6dc57acb538cea0a31d10830d314" Jan 05 23:35:10 crc kubenswrapper[5034]: I0105 23:35:10.950873 5034 scope.go:117] "RemoveContainer" containerID="8dd3ed86e94ce5a241b8e6895ee26f85094125cfb1c85b8acdb6351a72f8bc1c" Jan 05 23:35:10 crc kubenswrapper[5034]: I0105 23:35:10.987321 5034 scope.go:117] "RemoveContainer" containerID="c132f56f134a1a3e60bc277bd2a3ff68530ec887930cb1da18f1ae44a2e99706" Jan 05 23:35:12 crc kubenswrapper[5034]: I0105 23:35:12.044732 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-k9nvl"] Jan 05 23:35:12 crc kubenswrapper[5034]: I0105 23:35:12.055317 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-k9nvl"] Jan 05 23:35:12 crc kubenswrapper[5034]: I0105 23:35:12.257925 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-s7t6l" event={"ID":"999257cc-1aca-404f-834c-ddb12373b69e","Type":"ContainerStarted","Data":"95aa58bc50c9fc9f6a343d379414e0c304f7a138b92dbea700d3d48d50801cc0"} Jan 05 23:35:12 crc kubenswrapper[5034]: I0105 23:35:12.273669 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-s7t6l" podStartSLOduration=2.548051684 podStartE2EDuration="10.273650536s" podCreationTimestamp="2026-01-05 23:35:02 +0000 UTC" firstStartedPulling="2026-01-05 23:35:03.291766777 +0000 UTC m=+6195.663766216" lastFinishedPulling="2026-01-05 23:35:11.017365629 +0000 UTC m=+6203.389365068" observedRunningTime="2026-01-05 23:35:12.271907376 +0000 UTC m=+6204.643906825" watchObservedRunningTime="2026-01-05 23:35:12.273650536 +0000 UTC m=+6204.645649975" Jan 05 23:35:13 crc kubenswrapper[5034]: I0105 23:35:13.031056 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8b1a-account-create-update-chql2"] Jan 05 23:35:13 crc kubenswrapper[5034]: I0105 23:35:13.041383 
5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8b1a-account-create-update-chql2"] Jan 05 23:35:13 crc kubenswrapper[5034]: I0105 23:35:13.268040 5034 generic.go:334] "Generic (PLEG): container finished" podID="999257cc-1aca-404f-834c-ddb12373b69e" containerID="95aa58bc50c9fc9f6a343d379414e0c304f7a138b92dbea700d3d48d50801cc0" exitCode=0 Jan 05 23:35:13 crc kubenswrapper[5034]: I0105 23:35:13.268119 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-s7t6l" event={"ID":"999257cc-1aca-404f-834c-ddb12373b69e","Type":"ContainerDied","Data":"95aa58bc50c9fc9f6a343d379414e0c304f7a138b92dbea700d3d48d50801cc0"} Jan 05 23:35:13 crc kubenswrapper[5034]: I0105 23:35:13.856701 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89db938a-84f1-4a3b-942a-322a862a9987" path="/var/lib/kubelet/pods/89db938a-84f1-4a3b-942a-322a862a9987/volumes" Jan 05 23:35:13 crc kubenswrapper[5034]: I0105 23:35:13.857652 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa6fd90b-a398-4f8c-b3fe-17f18f6acba8" path="/var/lib/kubelet/pods/fa6fd90b-a398-4f8c-b3fe-17f18f6acba8/volumes" Jan 05 23:35:14 crc kubenswrapper[5034]: I0105 23:35:14.633149 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:14 crc kubenswrapper[5034]: I0105 23:35:14.749901 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-config-data\") pod \"999257cc-1aca-404f-834c-ddb12373b69e\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " Jan 05 23:35:14 crc kubenswrapper[5034]: I0105 23:35:14.749948 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz82n\" (UniqueName: \"kubernetes.io/projected/999257cc-1aca-404f-834c-ddb12373b69e-kube-api-access-gz82n\") pod \"999257cc-1aca-404f-834c-ddb12373b69e\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " Jan 05 23:35:14 crc kubenswrapper[5034]: I0105 23:35:14.750170 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-combined-ca-bundle\") pod \"999257cc-1aca-404f-834c-ddb12373b69e\" (UID: \"999257cc-1aca-404f-834c-ddb12373b69e\") " Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.290857 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-s7t6l" event={"ID":"999257cc-1aca-404f-834c-ddb12373b69e","Type":"ContainerDied","Data":"8b3f73710ef11be19f99e5b7781f9cafc8cacb16205e9168ffcbef4c5d210767"} Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.291660 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3f73710ef11be19f99e5b7781f9cafc8cacb16205e9168ffcbef4c5d210767" Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.291601 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-s7t6l" Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.468606 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/999257cc-1aca-404f-834c-ddb12373b69e-kube-api-access-gz82n" (OuterVolumeSpecName: "kube-api-access-gz82n") pod "999257cc-1aca-404f-834c-ddb12373b69e" (UID: "999257cc-1aca-404f-834c-ddb12373b69e"). InnerVolumeSpecName "kube-api-access-gz82n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.483432 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "999257cc-1aca-404f-834c-ddb12373b69e" (UID: "999257cc-1aca-404f-834c-ddb12373b69e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.530558 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-config-data" (OuterVolumeSpecName: "config-data") pod "999257cc-1aca-404f-834c-ddb12373b69e" (UID: "999257cc-1aca-404f-834c-ddb12373b69e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.567662 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.567990 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/999257cc-1aca-404f-834c-ddb12373b69e-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.568119 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz82n\" (UniqueName: \"kubernetes.io/projected/999257cc-1aca-404f-834c-ddb12373b69e-kube-api-access-gz82n\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:15 crc kubenswrapper[5034]: I0105 23:35:15.971798 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d7cbf459d-njclw" podUID="3aac7508-cac0-47ed-8636-25c715c0b8b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.120:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.120:8443: connect: connection refused" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.539856 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6f5cf6f656-7q457"] Jan 05 23:35:16 crc kubenswrapper[5034]: E0105 23:35:16.540829 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999257cc-1aca-404f-834c-ddb12373b69e" containerName="heat-db-sync" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.540864 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="999257cc-1aca-404f-834c-ddb12373b69e" containerName="heat-db-sync" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.541170 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="999257cc-1aca-404f-834c-ddb12373b69e" containerName="heat-db-sync" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.542325 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.546768 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9vsgg" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.547146 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.547277 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.566621 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f5cf6f656-7q457"] Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.600972 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz95q\" (UniqueName: \"kubernetes.io/projected/408a9063-dc19-4309-b9e9-a917f2db1b59-kube-api-access-fz95q\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.601052 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.601172 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data-custom\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.601313 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-combined-ca-bundle\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.646643 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-576b64bdd6-rrg5m"] Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.648545 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.651828 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.673950 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-576b64bdd6-rrg5m"] Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.702963 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.703030 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-combined-ca-bundle\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.703085 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data-custom\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.703851 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-combined-ca-bundle\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.703906 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgv7\" (UniqueName: \"kubernetes.io/projected/6130aa4a-d7b0-47a2-a265-1e82a036be25-kube-api-access-fhgv7\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.703950 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.703985 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data-custom\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.704037 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz95q\" (UniqueName: \"kubernetes.io/projected/408a9063-dc19-4309-b9e9-a917f2db1b59-kube-api-access-fz95q\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: 
\"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.711980 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.734587 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-combined-ca-bundle\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.737824 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz95q\" (UniqueName: \"kubernetes.io/projected/408a9063-dc19-4309-b9e9-a917f2db1b59-kube-api-access-fz95q\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.739854 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data-custom\") pod \"heat-engine-6f5cf6f656-7q457\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.755225 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-68b5fd59cf-c7njh"] Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.756777 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.765442 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.795279 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68b5fd59cf-c7njh"] Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.807644 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-combined-ca-bundle\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.807747 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data-custom\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.808624 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhgv7\" (UniqueName: \"kubernetes.io/projected/6130aa4a-d7b0-47a2-a265-1e82a036be25-kube-api-access-fhgv7\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.808723 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.808795 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data-custom\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.808915 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.808982 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-combined-ca-bundle\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.809073 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmvz\" (UniqueName: \"kubernetes.io/projected/d5aaaffb-065c-4bcf-adfb-9503525dd2da-kube-api-access-tlmvz\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " 
pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.815362 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data-custom\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.818185 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.826714 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhgv7\" (UniqueName: \"kubernetes.io/projected/6130aa4a-d7b0-47a2-a265-1e82a036be25-kube-api-access-fhgv7\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.835851 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-combined-ca-bundle\") pod \"heat-cfnapi-576b64bdd6-rrg5m\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.885232 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.914353 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data-custom\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.914592 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.914673 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmvz\" (UniqueName: \"kubernetes.io/projected/d5aaaffb-065c-4bcf-adfb-9503525dd2da-kube-api-access-tlmvz\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.914777 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-combined-ca-bundle\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.919347 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data-custom\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.919736 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-combined-ca-bundle\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.924764 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.942775 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmvz\" (UniqueName: \"kubernetes.io/projected/d5aaaffb-065c-4bcf-adfb-9503525dd2da-kube-api-access-tlmvz\") pod \"heat-api-68b5fd59cf-c7njh\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.969188 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:16 crc kubenswrapper[5034]: I0105 23:35:16.977532 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:17 crc kubenswrapper[5034]: I0105 23:35:17.556389 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f5cf6f656-7q457"] Jan 05 23:35:17 crc kubenswrapper[5034]: I0105 23:35:17.642655 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-576b64bdd6-rrg5m"] Jan 05 23:35:17 crc kubenswrapper[5034]: I0105 23:35:17.669999 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68b5fd59cf-c7njh"] Jan 05 23:35:18 crc kubenswrapper[5034]: I0105 23:35:18.325255 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68b5fd59cf-c7njh" event={"ID":"d5aaaffb-065c-4bcf-adfb-9503525dd2da","Type":"ContainerStarted","Data":"ec55b7f46e844ea7ab55d13e349f35b525b2ee99d305d546b88eb444b38466f7"} Jan 05 23:35:18 crc kubenswrapper[5034]: I0105 23:35:18.329554 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" event={"ID":"6130aa4a-d7b0-47a2-a265-1e82a036be25","Type":"ContainerStarted","Data":"d1ac58ef6a2bbda872e9aa6794ad071792b9c006664a3169397fd1aa27e43be0"} Jan 05 23:35:18 crc kubenswrapper[5034]: I0105 23:35:18.332388 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f5cf6f656-7q457" event={"ID":"408a9063-dc19-4309-b9e9-a917f2db1b59","Type":"ContainerStarted","Data":"9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6"} Jan 05 23:35:18 crc kubenswrapper[5034]: I0105 23:35:18.332463 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f5cf6f656-7q457" event={"ID":"408a9063-dc19-4309-b9e9-a917f2db1b59","Type":"ContainerStarted","Data":"dcd1eaa54e07fa38d6d63bf0909d39f6f37f8392d32bf54ec7aeeab1c321e9c0"} Jan 05 23:35:18 crc kubenswrapper[5034]: I0105 23:35:18.332566 5034 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:18 crc kubenswrapper[5034]: I0105 23:35:18.365868 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6f5cf6f656-7q457" podStartSLOduration=2.365846489 podStartE2EDuration="2.365846489s" podCreationTimestamp="2026-01-05 23:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:35:18.346456886 +0000 UTC m=+6210.718456325" watchObservedRunningTime="2026-01-05 23:35:18.365846489 +0000 UTC m=+6210.737845928" Jan 05 23:35:20 crc kubenswrapper[5034]: I0105 23:35:20.085465 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xcgcs"] Jan 05 23:35:20 crc kubenswrapper[5034]: I0105 23:35:20.099547 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xcgcs"] Jan 05 23:35:21 crc kubenswrapper[5034]: I0105 23:35:21.365522 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68b5fd59cf-c7njh" event={"ID":"d5aaaffb-065c-4bcf-adfb-9503525dd2da","Type":"ContainerStarted","Data":"02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14"} Jan 05 23:35:21 crc kubenswrapper[5034]: I0105 23:35:21.365998 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:21 crc kubenswrapper[5034]: I0105 23:35:21.367758 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" event={"ID":"6130aa4a-d7b0-47a2-a265-1e82a036be25","Type":"ContainerStarted","Data":"a4664829c25c0553698cad57dc9816c81e9dd8add0e98f388245829af23975b3"} Jan 05 23:35:21 crc kubenswrapper[5034]: I0105 23:35:21.368086 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:21 crc kubenswrapper[5034]: I0105 23:35:21.429485 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-68b5fd59cf-c7njh" podStartSLOduration=2.826090642 podStartE2EDuration="5.429457436s" podCreationTimestamp="2026-01-05 23:35:16 +0000 UTC" firstStartedPulling="2026-01-05 23:35:17.688273891 +0000 UTC m=+6210.060273330" lastFinishedPulling="2026-01-05 23:35:20.291640685 +0000 UTC m=+6212.663640124" observedRunningTime="2026-01-05 23:35:21.421398246 +0000 UTC m=+6213.793397685" watchObservedRunningTime="2026-01-05 23:35:21.429457436 +0000 UTC m=+6213.801456875" Jan 05 23:35:21 crc kubenswrapper[5034]: I0105 23:35:21.467734 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" podStartSLOduration=2.844277431 podStartE2EDuration="5.467711357s" podCreationTimestamp="2026-01-05 23:35:16 +0000 UTC" firstStartedPulling="2026-01-05 23:35:17.670192485 +0000 UTC m=+6210.042191914" lastFinishedPulling="2026-01-05 23:35:20.293626401 +0000 UTC m=+6212.665625840" observedRunningTime="2026-01-05 23:35:21.466970186 +0000 UTC m=+6213.838969625" watchObservedRunningTime="2026-01-05 23:35:21.467711357 +0000 UTC m=+6213.839710796" Jan 05 23:35:21 crc kubenswrapper[5034]: I0105 23:35:21.851293 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f351d493-b50b-4af9-bdce-a40e38e34d49" path="/var/lib/kubelet/pods/f351d493-b50b-4af9-bdce-a40e38e34d49/volumes" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.087929 5034 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-engine-8ffd6ffc4-pmz56"] Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.091411 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.102715 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8ffd6ffc4-pmz56"] Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.128713 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6df9874d88-kxwsz"] Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.130491 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.159583 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-config-data-custom\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.159697 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-combined-ca-bundle\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.159833 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-config-data\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.159869 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hbg\" (UniqueName: \"kubernetes.io/projected/19228d32-e4a4-408a-9951-730f63c4e7e7-kube-api-access-x5hbg\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.167776 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6b86954d65-ppnjv"] Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.169267 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.192474 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6df9874d88-kxwsz"] Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.203930 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b86954d65-ppnjv"] Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.262672 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-combined-ca-bundle\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.262737 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.262871 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data-custom\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.262964 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-config-data-custom\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.263186 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-combined-ca-bundle\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.264010 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.264040 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjsm\" (UniqueName: \"kubernetes.io/projected/b89ca491-4b57-4319-acda-9e1023b90d98-kube-api-access-tdjsm\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.264336 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data-custom\") pod \"heat-api-6b86954d65-ppnjv\" (UID: 
\"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.264407 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wf2c\" (UniqueName: \"kubernetes.io/projected/b097b141-506c-4819-a744-b80f525b7ca3-kube-api-access-6wf2c\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.264460 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-config-data\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.264502 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5hbg\" (UniqueName: \"kubernetes.io/projected/19228d32-e4a4-408a-9951-730f63c4e7e7-kube-api-access-x5hbg\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.264532 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-combined-ca-bundle\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.271700 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-config-data\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.271776 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-combined-ca-bundle\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.279153 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19228d32-e4a4-408a-9951-730f63c4e7e7-config-data-custom\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.281436 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5hbg\" (UniqueName: \"kubernetes.io/projected/19228d32-e4a4-408a-9951-730f63c4e7e7-kube-api-access-x5hbg\") pod \"heat-engine-8ffd6ffc4-pmz56\" (UID: \"19228d32-e4a4-408a-9951-730f63c4e7e7\") " pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.366961 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") 
" pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.367443 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjsm\" (UniqueName: \"kubernetes.io/projected/b89ca491-4b57-4319-acda-9e1023b90d98-kube-api-access-tdjsm\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.367546 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data-custom\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.367584 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wf2c\" (UniqueName: \"kubernetes.io/projected/b097b141-506c-4819-a744-b80f525b7ca3-kube-api-access-6wf2c\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.367632 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-combined-ca-bundle\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.368363 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-combined-ca-bundle\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.368417 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.368499 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data-custom\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.371954 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-combined-ca-bundle\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.372005 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 
23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.372208 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data-custom\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.374035 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-combined-ca-bundle\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.374255 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.379861 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data-custom\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.385256 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdjsm\" (UniqueName: \"kubernetes.io/projected/b89ca491-4b57-4319-acda-9e1023b90d98-kube-api-access-tdjsm\") pod \"heat-cfnapi-6df9874d88-kxwsz\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.386627 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wf2c\" (UniqueName: \"kubernetes.io/projected/b097b141-506c-4819-a744-b80f525b7ca3-kube-api-access-6wf2c\") pod \"heat-api-6b86954d65-ppnjv\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.443739 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.464092 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:25 crc kubenswrapper[5034]: I0105 23:35:25.486203 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.227928 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8ffd6ffc4-pmz56"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.291901 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b86954d65-ppnjv"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.307210 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6df9874d88-kxwsz"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.437151 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8ffd6ffc4-pmz56" event={"ID":"19228d32-e4a4-408a-9951-730f63c4e7e7","Type":"ContainerStarted","Data":"7c7f6a66b257c9bc0b0302b53bad0023fb8be9c2c34ecf1f8e864c9620a1376c"} Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.439375 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b86954d65-ppnjv" event={"ID":"b097b141-506c-4819-a744-b80f525b7ca3","Type":"ContainerStarted","Data":"d077e38261b22ec7dc02fbf222a13b733db668879a23d75645ede656e30da0f6"} Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.442329 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" event={"ID":"b89ca491-4b57-4319-acda-9e1023b90d98","Type":"ContainerStarted","Data":"a4b51c3682e1eb01d0121b27cfece969244ffa55ab776ecd12ec502469b658ef"} Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.711989 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-68b5fd59cf-c7njh"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.712504 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-68b5fd59cf-c7njh" podUID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" containerName="heat-api" containerID="cri-o://02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14" gracePeriod=60 Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.734823 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-68b5fd59cf-c7njh" podUID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.126:8004/healthcheck\": EOF" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.735364 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-576b64bdd6-rrg5m"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.735720 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" podUID="6130aa4a-d7b0-47a2-a265-1e82a036be25" containerName="heat-cfnapi" containerID="cri-o://a4664829c25c0553698cad57dc9816c81e9dd8add0e98f388245829af23975b3" gracePeriod=60 Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.807885 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7f9ccf9bf6-cd8v7"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.810816 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.815690 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.815925 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.833521 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65bf95755f-d74sv"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.835537 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.843592 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.844416 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.856343 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7f9ccf9bf6-cd8v7"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.874326 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65bf95755f-d74sv"] Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949355 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-internal-tls-certs\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949428 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-combined-ca-bundle\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949462 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-public-tls-certs\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949517 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46kzp\" (UniqueName: \"kubernetes.io/projected/45da723d-b34f-40ef-8938-abf9f614c451-kube-api-access-46kzp\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949556 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-config-data\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949597 5034 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-config-data-custom\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949669 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnrvs\" (UniqueName: \"kubernetes.io/projected/90de7a12-45fa-4f42-a94b-528884fd2afb-kube-api-access-tnrvs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949759 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-internal-tls-certs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949822 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-config-data-custom\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949847 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-public-tls-certs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949901 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-combined-ca-bundle\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:26 crc kubenswrapper[5034]: I0105 23:35:26.949992 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-config-data\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052566 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-internal-tls-certs\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052619 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-combined-ca-bundle\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " 
pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052643 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-public-tls-certs\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052671 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46kzp\" (UniqueName: \"kubernetes.io/projected/45da723d-b34f-40ef-8938-abf9f614c451-kube-api-access-46kzp\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052708 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-config-data\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052740 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-config-data-custom\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052776 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnrvs\" (UniqueName: \"kubernetes.io/projected/90de7a12-45fa-4f42-a94b-528884fd2afb-kube-api-access-tnrvs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052810 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-internal-tls-certs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052847 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-config-data-custom\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052863 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-public-tls-certs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052897 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-combined-ca-bundle\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " 
pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.052952 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-config-data\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.064181 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-internal-tls-certs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.064250 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-combined-ca-bundle\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.065730 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-combined-ca-bundle\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.067555 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-config-data\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.069994 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-public-tls-certs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.071731 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-internal-tls-certs\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.072046 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-config-data-custom\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.072911 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90de7a12-45fa-4f42-a94b-528884fd2afb-config-data-custom\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.077743 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tnrvs\" (UniqueName: \"kubernetes.io/projected/90de7a12-45fa-4f42-a94b-528884fd2afb-kube-api-access-tnrvs\") pod \"heat-api-7f9ccf9bf6-cd8v7\" (UID: \"90de7a12-45fa-4f42-a94b-528884fd2afb\") " pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.079912 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-public-tls-certs\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.081936 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46kzp\" (UniqueName: \"kubernetes.io/projected/45da723d-b34f-40ef-8938-abf9f614c451-kube-api-access-46kzp\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.082540 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45da723d-b34f-40ef-8938-abf9f614c451-config-data\") pod \"heat-cfnapi-65bf95755f-d74sv\" (UID: \"45da723d-b34f-40ef-8938-abf9f614c451\") " pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.178653 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.194173 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.308446 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" podUID="6130aa4a-d7b0-47a2-a265-1e82a036be25" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.125:8000/healthcheck\": read tcp 10.217.0.2:44936->10.217.1.125:8000: read: connection reset by peer" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.309899 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" podUID="6130aa4a-d7b0-47a2-a265-1e82a036be25" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.125:8000/healthcheck\": dial tcp 10.217.1.125:8000: connect: connection refused" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.458744 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8ffd6ffc4-pmz56" event={"ID":"19228d32-e4a4-408a-9951-730f63c4e7e7","Type":"ContainerStarted","Data":"fd8f677846b37bf904d1d05b6383a9d8a03225d820f105998fcc6ac5be4a7215"} Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.460171 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.483750 5034 generic.go:334] "Generic (PLEG): container finished" podID="b097b141-506c-4819-a744-b80f525b7ca3" containerID="c1091dbf6598adc7c95bc7f1f800e67d8d32fa4ff7e3eee57b3d23fca14ad92b" exitCode=1 Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.484239 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b86954d65-ppnjv" 
event={"ID":"b097b141-506c-4819-a744-b80f525b7ca3","Type":"ContainerDied","Data":"c1091dbf6598adc7c95bc7f1f800e67d8d32fa4ff7e3eee57b3d23fca14ad92b"} Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.484801 5034 scope.go:117] "RemoveContainer" containerID="c1091dbf6598adc7c95bc7f1f800e67d8d32fa4ff7e3eee57b3d23fca14ad92b" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.487768 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8ffd6ffc4-pmz56" podStartSLOduration=2.487750931 podStartE2EDuration="2.487750931s" podCreationTimestamp="2026-01-05 23:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:35:27.480908566 +0000 UTC m=+6219.852908015" watchObservedRunningTime="2026-01-05 23:35:27.487750931 +0000 UTC m=+6219.859750370" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.505669 5034 generic.go:334] "Generic (PLEG): container finished" podID="b89ca491-4b57-4319-acda-9e1023b90d98" containerID="378def2383cadd07e2fcf0b81f6e0a744b9eb9253d83d24675e9d007452fde5e" exitCode=1 Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.506371 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" event={"ID":"b89ca491-4b57-4319-acda-9e1023b90d98","Type":"ContainerDied","Data":"378def2383cadd07e2fcf0b81f6e0a744b9eb9253d83d24675e9d007452fde5e"} Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.506625 5034 scope.go:117] "RemoveContainer" containerID="378def2383cadd07e2fcf0b81f6e0a744b9eb9253d83d24675e9d007452fde5e" Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.518464 5034 generic.go:334] "Generic (PLEG): container finished" podID="6130aa4a-d7b0-47a2-a265-1e82a036be25" containerID="a4664829c25c0553698cad57dc9816c81e9dd8add0e98f388245829af23975b3" exitCode=0 Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.518526 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" event={"ID":"6130aa4a-d7b0-47a2-a265-1e82a036be25","Type":"ContainerDied","Data":"a4664829c25c0553698cad57dc9816c81e9dd8add0e98f388245829af23975b3"} Jan 05 23:35:27 crc kubenswrapper[5034]: I0105 23:35:27.776238 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7f9ccf9bf6-cd8v7"] Jan 05 23:35:28 crc kubenswrapper[5034]: W0105 23:35:28.018640 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45da723d_b34f_40ef_8938_abf9f614c451.slice/crio-dd3d0893db588b0d9446c85e0c0711d127b7aab719d2611f0628d4e434cf5883 WatchSource:0}: Error finding container dd3d0893db588b0d9446c85e0c0711d127b7aab719d2611f0628d4e434cf5883: Status 404 returned error can't find the container with id dd3d0893db588b0d9446c85e0c0711d127b7aab719d2611f0628d4e434cf5883 Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.021778 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65bf95755f-d74sv"] Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.072235 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.197011 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data\") pod \"6130aa4a-d7b0-47a2-a265-1e82a036be25\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.197164 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data-custom\") pod \"6130aa4a-d7b0-47a2-a265-1e82a036be25\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.197464 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-combined-ca-bundle\") pod \"6130aa4a-d7b0-47a2-a265-1e82a036be25\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.197511 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhgv7\" (UniqueName: \"kubernetes.io/projected/6130aa4a-d7b0-47a2-a265-1e82a036be25-kube-api-access-fhgv7\") pod \"6130aa4a-d7b0-47a2-a265-1e82a036be25\" (UID: \"6130aa4a-d7b0-47a2-a265-1e82a036be25\") " Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.204801 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6130aa4a-d7b0-47a2-a265-1e82a036be25-kube-api-access-fhgv7" (OuterVolumeSpecName: "kube-api-access-fhgv7") pod "6130aa4a-d7b0-47a2-a265-1e82a036be25" (UID: "6130aa4a-d7b0-47a2-a265-1e82a036be25"). InnerVolumeSpecName "kube-api-access-fhgv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.206257 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6130aa4a-d7b0-47a2-a265-1e82a036be25" (UID: "6130aa4a-d7b0-47a2-a265-1e82a036be25"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.265368 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6130aa4a-d7b0-47a2-a265-1e82a036be25" (UID: "6130aa4a-d7b0-47a2-a265-1e82a036be25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.300224 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.300345 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhgv7\" (UniqueName: \"kubernetes.io/projected/6130aa4a-d7b0-47a2-a265-1e82a036be25-kube-api-access-fhgv7\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.300357 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.318777 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data" (OuterVolumeSpecName: "config-data") pod "6130aa4a-d7b0-47a2-a265-1e82a036be25" (UID: "6130aa4a-d7b0-47a2-a265-1e82a036be25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.402384 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6130aa4a-d7b0-47a2-a265-1e82a036be25-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.532198 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7f9ccf9bf6-cd8v7" event={"ID":"90de7a12-45fa-4f42-a94b-528884fd2afb","Type":"ContainerStarted","Data":"1b47d097e4f81606e277bbfe6c74df932f5122a638f052e9efe485838518173c"} Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.532257 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7f9ccf9bf6-cd8v7" event={"ID":"90de7a12-45fa-4f42-a94b-528884fd2afb","Type":"ContainerStarted","Data":"d2bae60071a74b670bdad3087b1ab0c0e8caa967865af1855a1d219e34c71b4e"} Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.532480 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.534349 5034 generic.go:334] "Generic (PLEG): container finished" podID="b097b141-506c-4819-a744-b80f525b7ca3" containerID="058d6e752ecc173f361761ad4b7fb87ae12529a7c78c0fb439074cea9657bce2" exitCode=1 Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.534392 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b86954d65-ppnjv" event={"ID":"b097b141-506c-4819-a744-b80f525b7ca3","Type":"ContainerDied","Data":"058d6e752ecc173f361761ad4b7fb87ae12529a7c78c0fb439074cea9657bce2"} Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.534421 5034 scope.go:117] "RemoveContainer" containerID="c1091dbf6598adc7c95bc7f1f800e67d8d32fa4ff7e3eee57b3d23fca14ad92b" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.535507 5034 scope.go:117] "RemoveContainer" containerID="058d6e752ecc173f361761ad4b7fb87ae12529a7c78c0fb439074cea9657bce2" Jan 05 23:35:28 crc kubenswrapper[5034]: E0105 23:35:28.535762 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api 
pod=heat-api-6b86954d65-ppnjv_openstack(b097b141-506c-4819-a744-b80f525b7ca3)\"" pod="openstack/heat-api-6b86954d65-ppnjv" podUID="b097b141-506c-4819-a744-b80f525b7ca3" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.538965 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bf95755f-d74sv" event={"ID":"45da723d-b34f-40ef-8938-abf9f614c451","Type":"ContainerStarted","Data":"dd3d0893db588b0d9446c85e0c0711d127b7aab719d2611f0628d4e434cf5883"} Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.545321 5034 generic.go:334] "Generic (PLEG): container finished" podID="b89ca491-4b57-4319-acda-9e1023b90d98" containerID="af66d179939477d30650334f25e7eeb16eb40a0875d876851de6e3e44583d251" exitCode=1 Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.546330 5034 scope.go:117] "RemoveContainer" containerID="af66d179939477d30650334f25e7eeb16eb40a0875d876851de6e3e44583d251" Jan 05 23:35:28 crc kubenswrapper[5034]: E0105 23:35:28.546657 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6df9874d88-kxwsz_openstack(b89ca491-4b57-4319-acda-9e1023b90d98)\"" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.546924 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" event={"ID":"b89ca491-4b57-4319-acda-9e1023b90d98","Type":"ContainerDied","Data":"af66d179939477d30650334f25e7eeb16eb40a0875d876851de6e3e44583d251"} Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.550147 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.550158 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-576b64bdd6-rrg5m" event={"ID":"6130aa4a-d7b0-47a2-a265-1e82a036be25","Type":"ContainerDied","Data":"d1ac58ef6a2bbda872e9aa6794ad071792b9c006664a3169397fd1aa27e43be0"} Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.573860 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7f9ccf9bf6-cd8v7" podStartSLOduration=2.573826386 podStartE2EDuration="2.573826386s" podCreationTimestamp="2026-01-05 23:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:35:28.556715998 +0000 UTC m=+6220.928715437" watchObservedRunningTime="2026-01-05 23:35:28.573826386 +0000 UTC m=+6220.945825825" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.666330 5034 scope.go:117] "RemoveContainer" containerID="378def2383cadd07e2fcf0b81f6e0a744b9eb9253d83d24675e9d007452fde5e" Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.700160 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-576b64bdd6-rrg5m"] Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.708840 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-576b64bdd6-rrg5m"] Jan 05 23:35:28 crc kubenswrapper[5034]: I0105 23:35:28.757747 5034 scope.go:117] "RemoveContainer" containerID="a4664829c25c0553698cad57dc9816c81e9dd8add0e98f388245829af23975b3" Jan 05 23:35:29 crc kubenswrapper[5034]: I0105 23:35:29.214135 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:35:29 crc kubenswrapper[5034]: I0105 23:35:29.563292 5034 scope.go:117] "RemoveContainer" containerID="af66d179939477d30650334f25e7eeb16eb40a0875d876851de6e3e44583d251" Jan 05 23:35:29 crc kubenswrapper[5034]: E0105 23:35:29.564848 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6df9874d88-kxwsz_openstack(b89ca491-4b57-4319-acda-9e1023b90d98)\"" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" Jan 05 23:35:29 crc kubenswrapper[5034]: I0105 23:35:29.569194 5034 scope.go:117] "RemoveContainer" containerID="058d6e752ecc173f361761ad4b7fb87ae12529a7c78c0fb439074cea9657bce2" Jan 05 23:35:29 crc kubenswrapper[5034]: E0105 23:35:29.569481 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6b86954d65-ppnjv_openstack(b097b141-506c-4819-a744-b80f525b7ca3)\"" pod="openstack/heat-api-6b86954d65-ppnjv" podUID="b097b141-506c-4819-a744-b80f525b7ca3" Jan 05 23:35:29 crc kubenswrapper[5034]: I0105 23:35:29.569890 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bf95755f-d74sv" event={"ID":"45da723d-b34f-40ef-8938-abf9f614c451","Type":"ContainerStarted","Data":"63290d40ccf2e3bc638a17898e9cdea9667db81b529e06f82f4990f16a28000d"} Jan 05 23:35:29 crc kubenswrapper[5034]: I0105 23:35:29.642109 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65bf95755f-d74sv" podStartSLOduration=3.642063772 podStartE2EDuration="3.642063772s" podCreationTimestamp="2026-01-05 23:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:35:29.636487833 +0000 UTC m=+6222.008487272" watchObservedRunningTime="2026-01-05 23:35:29.642063772 +0000 UTC m=+6222.014063211" Jan 05 23:35:29 crc kubenswrapper[5034]: I0105 23:35:29.856800 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6130aa4a-d7b0-47a2-a265-1e82a036be25" path="/var/lib/kubelet/pods/6130aa4a-d7b0-47a2-a265-1e82a036be25/volumes" Jan 05 23:35:30 crc kubenswrapper[5034]: I0105 23:35:30.464758 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:30 crc kubenswrapper[5034]: I0105 23:35:30.464824 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:30 crc kubenswrapper[5034]: I0105 23:35:30.487701 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:30 crc kubenswrapper[5034]: I0105 23:35:30.487775 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:30 crc kubenswrapper[5034]: I0105 23:35:30.579664 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:30 crc kubenswrapper[5034]: I0105 23:35:30.580176 5034 scope.go:117] "RemoveContainer" containerID="af66d179939477d30650334f25e7eeb16eb40a0875d876851de6e3e44583d251" Jan 05 23:35:30 crc kubenswrapper[5034]: I0105 23:35:30.580408 5034 scope.go:117] "RemoveContainer" 
containerID="058d6e752ecc173f361761ad4b7fb87ae12529a7c78c0fb439074cea9657bce2" Jan 05 23:35:30 crc kubenswrapper[5034]: E0105 23:35:30.580451 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6df9874d88-kxwsz_openstack(b89ca491-4b57-4319-acda-9e1023b90d98)\"" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" Jan 05 23:35:30 crc kubenswrapper[5034]: E0105 23:35:30.580687 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6b86954d65-ppnjv_openstack(b097b141-506c-4819-a744-b80f525b7ca3)\"" pod="openstack/heat-api-6b86954d65-ppnjv" podUID="b097b141-506c-4819-a744-b80f525b7ca3" Jan 05 23:35:31 crc kubenswrapper[5034]: I0105 23:35:31.591611 5034 scope.go:117] "RemoveContainer" containerID="af66d179939477d30650334f25e7eeb16eb40a0875d876851de6e3e44583d251" Jan 05 23:35:31 crc kubenswrapper[5034]: I0105 23:35:31.592090 5034 scope.go:117] "RemoveContainer" containerID="058d6e752ecc173f361761ad4b7fb87ae12529a7c78c0fb439074cea9657bce2" Jan 05 23:35:31 crc kubenswrapper[5034]: E0105 23:35:31.592377 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6df9874d88-kxwsz_openstack(b89ca491-4b57-4319-acda-9e1023b90d98)\"" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" Jan 05 23:35:31 crc kubenswrapper[5034]: E0105 23:35:31.592403 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6b86954d65-ppnjv_openstack(b097b141-506c-4819-a744-b80f525b7ca3)\"" pod="openstack/heat-api-6b86954d65-ppnjv" podUID="b097b141-506c-4819-a744-b80f525b7ca3" Jan 05 23:35:31 crc kubenswrapper[5034]: I0105 23:35:31.645773 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-d7cbf459d-njclw" Jan 05 23:35:31 crc kubenswrapper[5034]: I0105 23:35:31.707753 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7966f5c6c6-ct6c7"] Jan 05 23:35:31 crc kubenswrapper[5034]: I0105 23:35:31.708398 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7966f5c6c6-ct6c7" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon-log" containerID="cri-o://c848d8cc43e918f1830ecc176e77215c84e7ec172bf4cb09ccb4b241f3829099" gracePeriod=30 Jan 05 23:35:31 crc kubenswrapper[5034]: I0105 23:35:31.708557 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7966f5c6c6-ct6c7" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon" containerID="cri-o://e411a3eab2b7ca31e9c58eb4c15d002ed9d5332f570afe081989f59929ee4331" gracePeriod=30 Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.160851 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-68b5fd59cf-c7njh" podUID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.126:8004/healthcheck\": read tcp 10.217.0.2:46762->10.217.1.126:8004: read: connection reset by peer" Jan 05 
23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.161470 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-68b5fd59cf-c7njh" podUID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.126:8004/healthcheck\": dial tcp 10.217.1.126:8004: connect: connection refused" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.611680 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.628000 5034 generic.go:334] "Generic (PLEG): container finished" podID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" containerID="02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14" exitCode=0 Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.628063 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68b5fd59cf-c7njh" event={"ID":"d5aaaffb-065c-4bcf-adfb-9503525dd2da","Type":"ContainerDied","Data":"02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14"} Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.628071 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68b5fd59cf-c7njh" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.628116 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68b5fd59cf-c7njh" event={"ID":"d5aaaffb-065c-4bcf-adfb-9503525dd2da","Type":"ContainerDied","Data":"ec55b7f46e844ea7ab55d13e349f35b525b2ee99d305d546b88eb444b38466f7"} Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.628137 5034 scope.go:117] "RemoveContainer" containerID="02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.666781 5034 scope.go:117] "RemoveContainer" containerID="02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14" Jan 05 23:35:32 crc kubenswrapper[5034]: E0105 23:35:32.667372 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14\": container with ID starting with 02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14 not found: ID does not exist" containerID="02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.667420 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14"} err="failed to get container status \"02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14\": rpc error: code = NotFound desc = could not find container \"02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14\": container with ID starting with 02e05e9592c3f3d5b53169821134030a256a0fd0622baf6d9a201a3662d56e14 not found: ID does not exist" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.763847 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data-custom\") pod \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.764107 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlmvz\" 
(UniqueName: \"kubernetes.io/projected/d5aaaffb-065c-4bcf-adfb-9503525dd2da-kube-api-access-tlmvz\") pod \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.764171 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data\") pod \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.764296 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-combined-ca-bundle\") pod \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\" (UID: \"d5aaaffb-065c-4bcf-adfb-9503525dd2da\") " Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.771359 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5aaaffb-065c-4bcf-adfb-9503525dd2da-kube-api-access-tlmvz" (OuterVolumeSpecName: "kube-api-access-tlmvz") pod "d5aaaffb-065c-4bcf-adfb-9503525dd2da" (UID: "d5aaaffb-065c-4bcf-adfb-9503525dd2da"). InnerVolumeSpecName "kube-api-access-tlmvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.771590 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d5aaaffb-065c-4bcf-adfb-9503525dd2da" (UID: "d5aaaffb-065c-4bcf-adfb-9503525dd2da"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.806612 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5aaaffb-065c-4bcf-adfb-9503525dd2da" (UID: "d5aaaffb-065c-4bcf-adfb-9503525dd2da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.833111 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data" (OuterVolumeSpecName: "config-data") pod "d5aaaffb-065c-4bcf-adfb-9503525dd2da" (UID: "d5aaaffb-065c-4bcf-adfb-9503525dd2da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.867495 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlmvz\" (UniqueName: \"kubernetes.io/projected/d5aaaffb-065c-4bcf-adfb-9503525dd2da-kube-api-access-tlmvz\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.867553 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.867565 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.867573 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5aaaffb-065c-4bcf-adfb-9503525dd2da-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.961864 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-68b5fd59cf-c7njh"] Jan 05 23:35:32 crc kubenswrapper[5034]: I0105 23:35:32.970128 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-68b5fd59cf-c7njh"] Jan 05 23:35:33 crc kubenswrapper[5034]: I0105 23:35:33.855378 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" path="/var/lib/kubelet/pods/d5aaaffb-065c-4bcf-adfb-9503525dd2da/volumes" Jan 05 23:35:35 crc kubenswrapper[5034]: I0105 23:35:35.660982 5034 generic.go:334] "Generic (PLEG): container finished" podID="171b4c86-ff76-4145-9324-c0c5a501e968" containerID="e411a3eab2b7ca31e9c58eb4c15d002ed9d5332f570afe081989f59929ee4331" exitCode=0 Jan 05 23:35:35 crc kubenswrapper[5034]: I0105 23:35:35.661054 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7966f5c6c6-ct6c7" event={"ID":"171b4c86-ff76-4145-9324-c0c5a501e968","Type":"ContainerDied","Data":"e411a3eab2b7ca31e9c58eb4c15d002ed9d5332f570afe081989f59929ee4331"} Jan 05 23:35:36 crc kubenswrapper[5034]: I0105 23:35:36.919817 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:38 crc kubenswrapper[5034]: I0105 23:35:38.579113 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7f9ccf9bf6-cd8v7" Jan 05 23:35:38 crc kubenswrapper[5034]: I0105 23:35:38.654469 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b86954d65-ppnjv"] Jan 05 23:35:38 crc kubenswrapper[5034]: I0105 23:35:38.772471 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-65bf95755f-d74sv" Jan 05 23:35:38 crc kubenswrapper[5034]: I0105 23:35:38.832795 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6df9874d88-kxwsz"] Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.079343 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.243429 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data\") pod \"b097b141-506c-4819-a744-b80f525b7ca3\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.243735 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data-custom\") pod \"b097b141-506c-4819-a744-b80f525b7ca3\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.244000 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-combined-ca-bundle\") pod \"b097b141-506c-4819-a744-b80f525b7ca3\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.244166 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wf2c\" (UniqueName: \"kubernetes.io/projected/b097b141-506c-4819-a744-b80f525b7ca3-kube-api-access-6wf2c\") pod \"b097b141-506c-4819-a744-b80f525b7ca3\" (UID: \"b097b141-506c-4819-a744-b80f525b7ca3\") " Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.254627 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b097b141-506c-4819-a744-b80f525b7ca3-kube-api-access-6wf2c" (OuterVolumeSpecName: "kube-api-access-6wf2c") pod "b097b141-506c-4819-a744-b80f525b7ca3" (UID: "b097b141-506c-4819-a744-b80f525b7ca3"). InnerVolumeSpecName "kube-api-access-6wf2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.254614 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b097b141-506c-4819-a744-b80f525b7ca3" (UID: "b097b141-506c-4819-a744-b80f525b7ca3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.289485 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b097b141-506c-4819-a744-b80f525b7ca3" (UID: "b097b141-506c-4819-a744-b80f525b7ca3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.324427 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data" (OuterVolumeSpecName: "config-data") pod "b097b141-506c-4819-a744-b80f525b7ca3" (UID: "b097b141-506c-4819-a744-b80f525b7ca3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.347612 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.347650 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.347664 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097b141-506c-4819-a744-b80f525b7ca3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.347677 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wf2c\" (UniqueName: \"kubernetes.io/projected/b097b141-506c-4819-a744-b80f525b7ca3-kube-api-access-6wf2c\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.396452 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.552767 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdjsm\" (UniqueName: \"kubernetes.io/projected/b89ca491-4b57-4319-acda-9e1023b90d98-kube-api-access-tdjsm\") pod \"b89ca491-4b57-4319-acda-9e1023b90d98\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.552857 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-combined-ca-bundle\") pod \"b89ca491-4b57-4319-acda-9e1023b90d98\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.552927 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data-custom\") pod \"b89ca491-4b57-4319-acda-9e1023b90d98\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.552952 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data\") pod \"b89ca491-4b57-4319-acda-9e1023b90d98\" (UID: \"b89ca491-4b57-4319-acda-9e1023b90d98\") " Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.557018 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b89ca491-4b57-4319-acda-9e1023b90d98-kube-api-access-tdjsm" (OuterVolumeSpecName: "kube-api-access-tdjsm") pod "b89ca491-4b57-4319-acda-9e1023b90d98" (UID: "b89ca491-4b57-4319-acda-9e1023b90d98"). InnerVolumeSpecName "kube-api-access-tdjsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.561223 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b89ca491-4b57-4319-acda-9e1023b90d98" (UID: "b89ca491-4b57-4319-acda-9e1023b90d98"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.583344 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b89ca491-4b57-4319-acda-9e1023b90d98" (UID: "b89ca491-4b57-4319-acda-9e1023b90d98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.645347 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data" (OuterVolumeSpecName: "config-data") pod "b89ca491-4b57-4319-acda-9e1023b90d98" (UID: "b89ca491-4b57-4319-acda-9e1023b90d98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.655913 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdjsm\" (UniqueName: \"kubernetes.io/projected/b89ca491-4b57-4319-acda-9e1023b90d98-kube-api-access-tdjsm\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.655955 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.655969 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.655980 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b89ca491-4b57-4319-acda-9e1023b90d98-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.707995 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b86954d65-ppnjv" event={"ID":"b097b141-506c-4819-a744-b80f525b7ca3","Type":"ContainerDied","Data":"d077e38261b22ec7dc02fbf222a13b733db668879a23d75645ede656e30da0f6"} Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.708057 5034 scope.go:117] "RemoveContainer" containerID="058d6e752ecc173f361761ad4b7fb87ae12529a7c78c0fb439074cea9657bce2" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.708003 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b86954d65-ppnjv" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.710177 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" event={"ID":"b89ca491-4b57-4319-acda-9e1023b90d98","Type":"ContainerDied","Data":"a4b51c3682e1eb01d0121b27cfece969244ffa55ab776ecd12ec502469b658ef"} Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.710344 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6df9874d88-kxwsz" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.756042 5034 scope.go:117] "RemoveContainer" containerID="af66d179939477d30650334f25e7eeb16eb40a0875d876851de6e3e44583d251" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.776994 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6df9874d88-kxwsz"] Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.794163 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6df9874d88-kxwsz"] Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.801238 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b86954d65-ppnjv"] Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.812934 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6b86954d65-ppnjv"] Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.851419 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b097b141-506c-4819-a744-b80f525b7ca3" path="/var/lib/kubelet/pods/b097b141-506c-4819-a744-b80f525b7ca3/volumes" Jan 05 23:35:39 crc kubenswrapper[5034]: I0105 23:35:39.852203 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" path="/var/lib/kubelet/pods/b89ca491-4b57-4319-acda-9e1023b90d98/volumes" Jan 05 23:35:40 crc kubenswrapper[5034]: I0105 23:35:40.347832 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7966f5c6c6-ct6c7" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.116:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8443: connect: connection refused" Jan 05 23:35:45 crc kubenswrapper[5034]: I0105 23:35:45.472935 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8ffd6ffc4-pmz56" Jan 05 23:35:45 crc kubenswrapper[5034]: I0105 23:35:45.533768 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6f5cf6f656-7q457"] Jan 05 23:35:45 crc kubenswrapper[5034]: I0105 23:35:45.534353 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6f5cf6f656-7q457" podUID="408a9063-dc19-4309-b9e9-a917f2db1b59" containerName="heat-engine" containerID="cri-o://9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" gracePeriod=60 Jan 05 23:35:46 crc kubenswrapper[5034]: E0105 23:35:46.891164 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 23:35:46 crc kubenswrapper[5034]: E0105 23:35:46.893230 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 23:35:46 crc kubenswrapper[5034]: E0105 23:35:46.894671 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 23:35:46 crc kubenswrapper[5034]: E0105 23:35:46.894706 5034 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6f5cf6f656-7q457" podUID="408a9063-dc19-4309-b9e9-a917f2db1b59" containerName="heat-engine" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.737122 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fvzj8"] Jan 05 23:35:47 crc kubenswrapper[5034]: E0105 23:35:47.738499 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.738521 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: E0105 23:35:47.738555 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b097b141-506c-4819-a744-b80f525b7ca3" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.738562 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b097b141-506c-4819-a744-b80f525b7ca3" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: E0105 23:35:47.738576 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.738586 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: E0105 23:35:47.738602 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6130aa4a-d7b0-47a2-a265-1e82a036be25" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.738611 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6130aa4a-d7b0-47a2-a265-1e82a036be25" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: E0105 23:35:47.738625 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.738631 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: E0105 23:35:47.738649 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b097b141-506c-4819-a744-b80f525b7ca3" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.738657 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="b097b141-506c-4819-a744-b80f525b7ca3" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.738996 5034 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b097b141-506c-4819-a744-b80f525b7ca3" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.739025 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6130aa4a-d7b0-47a2-a265-1e82a036be25" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.739051 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5aaaffb-065c-4bcf-adfb-9503525dd2da" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.739067 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.739095 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b89ca491-4b57-4319-acda-9e1023b90d98" containerName="heat-cfnapi" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.739778 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="b097b141-506c-4819-a744-b80f525b7ca3" containerName="heat-api" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.741672 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.756134 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fvzj8"] Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.874190 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp9wj\" (UniqueName: \"kubernetes.io/projected/7e69532a-3d9d-4796-b336-f696e943a116-kube-api-access-kp9wj\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.874629 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-catalog-content\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.874675 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-utilities\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.976566 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-catalog-content\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.976616 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-utilities\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: 
I0105 23:35:47.976715 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp9wj\" (UniqueName: \"kubernetes.io/projected/7e69532a-3d9d-4796-b336-f696e943a116-kube-api-access-kp9wj\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.977878 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-catalog-content\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.978039 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-utilities\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:47 crc kubenswrapper[5034]: I0105 23:35:47.997575 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp9wj\" (UniqueName: \"kubernetes.io/projected/7e69532a-3d9d-4796-b336-f696e943a116-kube-api-access-kp9wj\") pod \"certified-operators-fvzj8\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:48 crc kubenswrapper[5034]: I0105 23:35:48.068457 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:48 crc kubenswrapper[5034]: I0105 23:35:48.692507 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fvzj8"] Jan 05 23:35:48 crc kubenswrapper[5034]: I0105 23:35:48.807166 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvzj8" event={"ID":"7e69532a-3d9d-4796-b336-f696e943a116","Type":"ContainerStarted","Data":"f15355a027484953bbec9ec41b75943b64ccd9f78318c57fa7f84da03f0c03e5"} Jan 05 23:35:49 crc kubenswrapper[5034]: I0105 23:35:49.826842 5034 generic.go:334] "Generic (PLEG): container finished" podID="7e69532a-3d9d-4796-b336-f696e943a116" containerID="c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558" exitCode=0 Jan 05 23:35:49 crc kubenswrapper[5034]: I0105 23:35:49.826918 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvzj8" event={"ID":"7e69532a-3d9d-4796-b336-f696e943a116","Type":"ContainerDied","Data":"c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558"} Jan 05 23:35:50 crc kubenswrapper[5034]: I0105 23:35:50.346961 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7966f5c6c6-ct6c7" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.116:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8443: connect: connection refused" Jan 05 23:35:50 crc kubenswrapper[5034]: I0105 23:35:50.841477 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvzj8" event={"ID":"7e69532a-3d9d-4796-b336-f696e943a116","Type":"ContainerStarted","Data":"5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8"} Jan 05 23:35:51 crc 
kubenswrapper[5034]: I0105 23:35:51.856124 5034 generic.go:334] "Generic (PLEG): container finished" podID="7e69532a-3d9d-4796-b336-f696e943a116" containerID="5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8" exitCode=0 Jan 05 23:35:51 crc kubenswrapper[5034]: I0105 23:35:51.856231 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvzj8" event={"ID":"7e69532a-3d9d-4796-b336-f696e943a116","Type":"ContainerDied","Data":"5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8"} Jan 05 23:35:52 crc kubenswrapper[5034]: I0105 23:35:52.872493 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvzj8" event={"ID":"7e69532a-3d9d-4796-b336-f696e943a116","Type":"ContainerStarted","Data":"0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790"} Jan 05 23:35:52 crc kubenswrapper[5034]: I0105 23:35:52.897653 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fvzj8" podStartSLOduration=3.329908617 podStartE2EDuration="5.897628014s" podCreationTimestamp="2026-01-05 23:35:47 +0000 UTC" firstStartedPulling="2026-01-05 23:35:49.829695155 +0000 UTC m=+6242.201694594" lastFinishedPulling="2026-01-05 23:35:52.397414562 +0000 UTC m=+6244.769413991" observedRunningTime="2026-01-05 23:35:52.889896413 +0000 UTC m=+6245.261895852" watchObservedRunningTime="2026-01-05 23:35:52.897628014 +0000 UTC m=+6245.269627453" Jan 05 23:35:56 crc kubenswrapper[5034]: E0105 23:35:56.887710 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6 is running failed: container process not found" containerID="9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 23:35:56 crc kubenswrapper[5034]: E0105 23:35:56.891272 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6 is running failed: container process not found" containerID="9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 23:35:56 crc kubenswrapper[5034]: E0105 23:35:56.891677 5034 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6 is running failed: container process not found" containerID="9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 23:35:56 crc kubenswrapper[5034]: E0105 23:35:56.891742 5034 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-6f5cf6f656-7q457" podUID="408a9063-dc19-4309-b9e9-a917f2db1b59" containerName="heat-engine" Jan 05 23:35:56 crc kubenswrapper[5034]: I0105 23:35:56.935392 5034 generic.go:334] "Generic (PLEG): container finished" podID="408a9063-dc19-4309-b9e9-a917f2db1b59" 
containerID="9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" exitCode=0 Jan 05 23:35:56 crc kubenswrapper[5034]: I0105 23:35:56.935451 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f5cf6f656-7q457" event={"ID":"408a9063-dc19-4309-b9e9-a917f2db1b59","Type":"ContainerDied","Data":"9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6"} Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.169038 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.307230 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data\") pod \"408a9063-dc19-4309-b9e9-a917f2db1b59\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.307447 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data-custom\") pod \"408a9063-dc19-4309-b9e9-a917f2db1b59\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.308344 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-combined-ca-bundle\") pod \"408a9063-dc19-4309-b9e9-a917f2db1b59\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.308504 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz95q\" (UniqueName: \"kubernetes.io/projected/408a9063-dc19-4309-b9e9-a917f2db1b59-kube-api-access-fz95q\") pod \"408a9063-dc19-4309-b9e9-a917f2db1b59\" (UID: \"408a9063-dc19-4309-b9e9-a917f2db1b59\") " Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.316143 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408a9063-dc19-4309-b9e9-a917f2db1b59-kube-api-access-fz95q" (OuterVolumeSpecName: "kube-api-access-fz95q") pod "408a9063-dc19-4309-b9e9-a917f2db1b59" (UID: "408a9063-dc19-4309-b9e9-a917f2db1b59"). InnerVolumeSpecName "kube-api-access-fz95q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.319369 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "408a9063-dc19-4309-b9e9-a917f2db1b59" (UID: "408a9063-dc19-4309-b9e9-a917f2db1b59"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.348713 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "408a9063-dc19-4309-b9e9-a917f2db1b59" (UID: "408a9063-dc19-4309-b9e9-a917f2db1b59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.368250 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data" (OuterVolumeSpecName: "config-data") pod "408a9063-dc19-4309-b9e9-a917f2db1b59" (UID: "408a9063-dc19-4309-b9e9-a917f2db1b59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.411896 5034 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.411938 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.411950 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz95q\" (UniqueName: \"kubernetes.io/projected/408a9063-dc19-4309-b9e9-a917f2db1b59-kube-api-access-fz95q\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.411961 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/408a9063-dc19-4309-b9e9-a917f2db1b59-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.945114 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f5cf6f656-7q457" event={"ID":"408a9063-dc19-4309-b9e9-a917f2db1b59","Type":"ContainerDied","Data":"dcd1eaa54e07fa38d6d63bf0909d39f6f37f8392d32bf54ec7aeeab1c321e9c0"} Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.946155 5034 scope.go:117] "RemoveContainer" containerID="9521ddf1a4b2ec5615733ba6ea025b5a1026fd4d67f3c9979fd12cf1355d58e6" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.945175 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6f5cf6f656-7q457" Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.976399 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6f5cf6f656-7q457"] Jan 05 23:35:57 crc kubenswrapper[5034]: I0105 23:35:57.985128 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6f5cf6f656-7q457"] Jan 05 23:35:58 crc kubenswrapper[5034]: I0105 23:35:58.069044 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:58 crc kubenswrapper[5034]: I0105 23:35:58.069163 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:58 crc kubenswrapper[5034]: I0105 23:35:58.121654 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:59 crc kubenswrapper[5034]: I0105 23:35:59.013598 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:35:59 crc kubenswrapper[5034]: I0105 23:35:59.061071 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fvzj8"] Jan 05 23:35:59 crc kubenswrapper[5034]: I0105 23:35:59.850543 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="408a9063-dc19-4309-b9e9-a917f2db1b59" path="/var/lib/kubelet/pods/408a9063-dc19-4309-b9e9-a917f2db1b59/volumes" Jan 05 23:36:00 crc kubenswrapper[5034]: I0105 23:36:00.347643 5034 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7966f5c6c6-ct6c7" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.116:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8443: connect: connection refused" Jan 05 23:36:00 crc kubenswrapper[5034]: I0105 23:36:00.347766 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:36:00 crc kubenswrapper[5034]: I0105 23:36:00.981412 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fvzj8" podUID="7e69532a-3d9d-4796-b336-f696e943a116" containerName="registry-server" containerID="cri-o://0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790" gracePeriod=2 Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.451009 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.506401 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-catalog-content\") pod \"7e69532a-3d9d-4796-b336-f696e943a116\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.506682 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp9wj\" (UniqueName: \"kubernetes.io/projected/7e69532a-3d9d-4796-b336-f696e943a116-kube-api-access-kp9wj\") pod \"7e69532a-3d9d-4796-b336-f696e943a116\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.506803 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-utilities\") pod \"7e69532a-3d9d-4796-b336-f696e943a116\" (UID: \"7e69532a-3d9d-4796-b336-f696e943a116\") " Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.507353 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-utilities" (OuterVolumeSpecName: "utilities") pod "7e69532a-3d9d-4796-b336-f696e943a116" (UID: "7e69532a-3d9d-4796-b336-f696e943a116"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.507975 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.512301 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e69532a-3d9d-4796-b336-f696e943a116-kube-api-access-kp9wj" (OuterVolumeSpecName: "kube-api-access-kp9wj") pod "7e69532a-3d9d-4796-b336-f696e943a116" (UID: "7e69532a-3d9d-4796-b336-f696e943a116"). InnerVolumeSpecName "kube-api-access-kp9wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.569246 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e69532a-3d9d-4796-b336-f696e943a116" (UID: "7e69532a-3d9d-4796-b336-f696e943a116"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.611053 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp9wj\" (UniqueName: \"kubernetes.io/projected/7e69532a-3d9d-4796-b336-f696e943a116-kube-api-access-kp9wj\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.611114 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e69532a-3d9d-4796-b336-f696e943a116-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.992785 5034 generic.go:334] "Generic (PLEG): container finished" podID="7e69532a-3d9d-4796-b336-f696e943a116" containerID="0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790" exitCode=0 Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.992879 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fvzj8" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.992911 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvzj8" event={"ID":"7e69532a-3d9d-4796-b336-f696e943a116","Type":"ContainerDied","Data":"0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790"} Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.992972 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvzj8" event={"ID":"7e69532a-3d9d-4796-b336-f696e943a116","Type":"ContainerDied","Data":"f15355a027484953bbec9ec41b75943b64ccd9f78318c57fa7f84da03f0c03e5"} Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.992994 5034 scope.go:117] "RemoveContainer" containerID="0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790" Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.995936 5034 generic.go:334] "Generic (PLEG): container finished" podID="171b4c86-ff76-4145-9324-c0c5a501e968" containerID="c848d8cc43e918f1830ecc176e77215c84e7ec172bf4cb09ccb4b241f3829099" exitCode=137 Jan 05 23:36:01 crc kubenswrapper[5034]: I0105 23:36:01.995959 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7966f5c6c6-ct6c7" event={"ID":"171b4c86-ff76-4145-9324-c0c5a501e968","Type":"ContainerDied","Data":"c848d8cc43e918f1830ecc176e77215c84e7ec172bf4cb09ccb4b241f3829099"} Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.021441 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fvzj8"] Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.026606 5034 scope.go:117] "RemoveContainer" containerID="5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.031585 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fvzj8"] Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.051515 5034 scope.go:117] "RemoveContainer" containerID="c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.073119 5034 scope.go:117] "RemoveContainer" containerID="0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790" Jan 05 23:36:02 crc kubenswrapper[5034]: E0105 23:36:02.073674 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790\": container with ID starting with 0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790 not found: ID does not exist" containerID="0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.073711 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790"} err="failed to get container status \"0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790\": rpc error: code = NotFound desc = could not find container \"0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790\": container with ID starting with 0a81db59bbbdabd9d4761b24cda79b63cd8e1ab7edd9059a6987488778002790 not found: ID does not exist" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.073736 5034 scope.go:117] "RemoveContainer" containerID="5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8" Jan 05 23:36:02 crc kubenswrapper[5034]: E0105 23:36:02.074087 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8\": container with ID starting with 5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8 not found: ID does not exist" containerID="5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.074115 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8"} err="failed to get container status \"5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8\": rpc error: code = NotFound desc = could not find container \"5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8\": container with ID starting with 5c965e5d8a9753448a72a89c58a2fa899b654d8c1baa6739d98750f3cc299ba8 not found: ID does not exist" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.074136 5034 scope.go:117] "RemoveContainer" containerID="c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558" Jan 05 23:36:02 crc kubenswrapper[5034]: E0105 23:36:02.074557 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558\": container with ID starting with c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558 not found: ID does not exist" containerID="c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.074608 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558"} err="failed to get container status \"c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558\": rpc error: code = NotFound desc = could not find container \"c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558\": container with ID starting with c4ee4bb5bfc042201ee1c3bc9ac48d8a75b3fdb1fd3fff8944910f1bc0d65558 not found: ID does not exist" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.639193 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.742208 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-combined-ca-bundle\") pod \"171b4c86-ff76-4145-9324-c0c5a501e968\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.742721 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-secret-key\") pod \"171b4c86-ff76-4145-9324-c0c5a501e968\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.743157 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/171b4c86-ff76-4145-9324-c0c5a501e968-logs\") pod \"171b4c86-ff76-4145-9324-c0c5a501e968\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.743233 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-scripts\") pod \"171b4c86-ff76-4145-9324-c0c5a501e968\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.743444 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-config-data\") pod \"171b4c86-ff76-4145-9324-c0c5a501e968\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.743925 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-tls-certs\") pod \"171b4c86-ff76-4145-9324-c0c5a501e968\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.743986 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/171b4c86-ff76-4145-9324-c0c5a501e968-logs" (OuterVolumeSpecName: "logs") pod "171b4c86-ff76-4145-9324-c0c5a501e968" (UID: "171b4c86-ff76-4145-9324-c0c5a501e968"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.744134 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttmbt\" (UniqueName: \"kubernetes.io/projected/171b4c86-ff76-4145-9324-c0c5a501e968-kube-api-access-ttmbt\") pod \"171b4c86-ff76-4145-9324-c0c5a501e968\" (UID: \"171b4c86-ff76-4145-9324-c0c5a501e968\") " Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.745903 5034 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/171b4c86-ff76-4145-9324-c0c5a501e968-logs\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.749884 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "171b4c86-ff76-4145-9324-c0c5a501e968" (UID: "171b4c86-ff76-4145-9324-c0c5a501e968"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.750670 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/171b4c86-ff76-4145-9324-c0c5a501e968-kube-api-access-ttmbt" (OuterVolumeSpecName: "kube-api-access-ttmbt") pod "171b4c86-ff76-4145-9324-c0c5a501e968" (UID: "171b4c86-ff76-4145-9324-c0c5a501e968"). InnerVolumeSpecName "kube-api-access-ttmbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.771339 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-scripts" (OuterVolumeSpecName: "scripts") pod "171b4c86-ff76-4145-9324-c0c5a501e968" (UID: "171b4c86-ff76-4145-9324-c0c5a501e968"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.772690 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-config-data" (OuterVolumeSpecName: "config-data") pod "171b4c86-ff76-4145-9324-c0c5a501e968" (UID: "171b4c86-ff76-4145-9324-c0c5a501e968"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.775746 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "171b4c86-ff76-4145-9324-c0c5a501e968" (UID: "171b4c86-ff76-4145-9324-c0c5a501e968"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.840139 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "171b4c86-ff76-4145-9324-c0c5a501e968" (UID: "171b4c86-ff76-4145-9324-c0c5a501e968"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.859507 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttmbt\" (UniqueName: \"kubernetes.io/projected/171b4c86-ff76-4145-9324-c0c5a501e968-kube-api-access-ttmbt\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.859566 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.859590 5034 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.859605 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.859620 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/171b4c86-ff76-4145-9324-c0c5a501e968-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:02 crc kubenswrapper[5034]: I0105 23:36:02.859633 5034 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/171b4c86-ff76-4145-9324-c0c5a501e968-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:03 crc kubenswrapper[5034]: I0105 23:36:03.009003 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7966f5c6c6-ct6c7" event={"ID":"171b4c86-ff76-4145-9324-c0c5a501e968","Type":"ContainerDied","Data":"1b0bb73385f515e3a26949867636f58e0f6a7308c48897b382c70f7a5787627c"} Jan 05 23:36:03 crc kubenswrapper[5034]: I0105 23:36:03.009109 5034 scope.go:117] "RemoveContainer" containerID="e411a3eab2b7ca31e9c58eb4c15d002ed9d5332f570afe081989f59929ee4331" Jan 05 23:36:03 crc kubenswrapper[5034]: I0105 23:36:03.009330 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7966f5c6c6-ct6c7" Jan 05 23:36:03 crc kubenswrapper[5034]: I0105 23:36:03.050448 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7966f5c6c6-ct6c7"] Jan 05 23:36:03 crc kubenswrapper[5034]: I0105 23:36:03.059857 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7966f5c6c6-ct6c7"] Jan 05 23:36:03 crc kubenswrapper[5034]: I0105 23:36:03.176396 5034 scope.go:117] "RemoveContainer" containerID="c848d8cc43e918f1830ecc176e77215c84e7ec172bf4cb09ccb4b241f3829099" Jan 05 23:36:03 crc kubenswrapper[5034]: I0105 23:36:03.858634 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" path="/var/lib/kubelet/pods/171b4c86-ff76-4145-9324-c0c5a501e968/volumes" Jan 05 23:36:03 crc kubenswrapper[5034]: I0105 23:36:03.860658 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e69532a-3d9d-4796-b336-f696e943a116" path="/var/lib/kubelet/pods/7e69532a-3d9d-4796-b336-f696e943a116/volumes" Jan 05 23:36:11 crc kubenswrapper[5034]: I0105 23:36:11.105413 5034 scope.go:117] "RemoveContainer" containerID="de18e02efd23ec84c46e0a51b0b21f6f738511f24b5c52c3a202d8c1d45cdb31" Jan 05 23:36:11 crc kubenswrapper[5034]: I0105 23:36:11.154257 5034 scope.go:117] "RemoveContainer" containerID="1bc5612160501759aabec8a6f7b00314fadab142cd741789586a71bd904cce9f" Jan 05 23:36:11 crc kubenswrapper[5034]: I0105 23:36:11.202997 5034 scope.go:117] "RemoveContainer" containerID="43a47e5f6cf830d2a52fc7d42e0f8b4c4dc0ddc7efaa62a8c0ae879a5c9ea103" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.764276 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922"] Jan 05 23:36:13 crc kubenswrapper[5034]: E0105 23:36:13.765562 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon-log" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.765585 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon-log" Jan 05 23:36:13 crc kubenswrapper[5034]: E0105 23:36:13.765658 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e69532a-3d9d-4796-b336-f696e943a116" containerName="extract-content" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.765672 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e69532a-3d9d-4796-b336-f696e943a116" containerName="extract-content" Jan 05 23:36:13 crc kubenswrapper[5034]: E0105 23:36:13.765703 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e69532a-3d9d-4796-b336-f696e943a116" containerName="registry-server" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.765716 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e69532a-3d9d-4796-b336-f696e943a116" containerName="registry-server" Jan 05 23:36:13 crc kubenswrapper[5034]: E0105 23:36:13.765744 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.765756 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon" Jan 05 23:36:13 crc kubenswrapper[5034]: E0105 23:36:13.765799 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408a9063-dc19-4309-b9e9-a917f2db1b59" 
containerName="heat-engine" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.765810 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="408a9063-dc19-4309-b9e9-a917f2db1b59" containerName="heat-engine" Jan 05 23:36:13 crc kubenswrapper[5034]: E0105 23:36:13.765827 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e69532a-3d9d-4796-b336-f696e943a116" containerName="extract-utilities" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.765841 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e69532a-3d9d-4796-b336-f696e943a116" containerName="extract-utilities" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.766275 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.766301 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="171b4c86-ff76-4145-9324-c0c5a501e968" containerName="horizon-log" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.766323 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e69532a-3d9d-4796-b336-f696e943a116" containerName="registry-server" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.766343 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="408a9063-dc19-4309-b9e9-a917f2db1b59" containerName="heat-engine" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.769262 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.772228 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.783984 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922"] Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.848941 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.849053 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck88t\" (UniqueName: \"kubernetes.io/projected/737ad453-4b93-48b2-aa08-c3e69e6f81e7-kube-api-access-ck88t\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.849114 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.951834 5034 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.951966 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck88t\" (UniqueName: \"kubernetes.io/projected/737ad453-4b93-48b2-aa08-c3e69e6f81e7-kube-api-access-ck88t\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.951999 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.952562 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.953214 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:13 crc kubenswrapper[5034]: I0105 23:36:13.974981 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck88t\" (UniqueName: \"kubernetes.io/projected/737ad453-4b93-48b2-aa08-c3e69e6f81e7-kube-api-access-ck88t\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:14 crc kubenswrapper[5034]: I0105 23:36:14.101110 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:14 crc kubenswrapper[5034]: I0105 23:36:14.631785 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922"] Jan 05 23:36:15 crc kubenswrapper[5034]: I0105 23:36:15.141860 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" event={"ID":"737ad453-4b93-48b2-aa08-c3e69e6f81e7","Type":"ContainerStarted","Data":"1ea927960037d22d124e5231545b2011fea65a9e9c6724705fd8276394853f5f"} Jan 05 23:36:15 crc kubenswrapper[5034]: I0105 23:36:15.142189 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" event={"ID":"737ad453-4b93-48b2-aa08-c3e69e6f81e7","Type":"ContainerStarted","Data":"a1aaf9326a86e1d1e5b837ae2ac2fbc54837fe1721184c98fe0f9aac538b264e"} Jan 05 23:36:16 crc kubenswrapper[5034]: I0105 23:36:16.154129 5034 generic.go:334] "Generic (PLEG): container finished" podID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerID="1ea927960037d22d124e5231545b2011fea65a9e9c6724705fd8276394853f5f" exitCode=0 Jan 05 23:36:16 crc kubenswrapper[5034]: I0105 23:36:16.154567 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" event={"ID":"737ad453-4b93-48b2-aa08-c3e69e6f81e7","Type":"ContainerDied","Data":"1ea927960037d22d124e5231545b2011fea65a9e9c6724705fd8276394853f5f"} Jan 05 23:36:18 crc kubenswrapper[5034]: I0105 23:36:18.182405 5034 generic.go:334] "Generic (PLEG): container finished" podID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerID="52ff4f1c808f6e6abff8f2c056bd0ab5410547b318a3df92538d40a602187706" exitCode=0 Jan 05 23:36:18 crc kubenswrapper[5034]: I0105 23:36:18.182678 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" event={"ID":"737ad453-4b93-48b2-aa08-c3e69e6f81e7","Type":"ContainerDied","Data":"52ff4f1c808f6e6abff8f2c056bd0ab5410547b318a3df92538d40a602187706"} Jan 05 23:36:19 crc kubenswrapper[5034]: I0105 23:36:19.205810 5034 generic.go:334] "Generic (PLEG): container finished" podID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerID="bd80fca0f9cd832645a32f9d8fb70eb5ac1db9a0af16df1bafe60300e11eb913" exitCode=0 Jan 05 23:36:19 crc kubenswrapper[5034]: I0105 23:36:19.206361 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" event={"ID":"737ad453-4b93-48b2-aa08-c3e69e6f81e7","Type":"ContainerDied","Data":"bd80fca0f9cd832645a32f9d8fb70eb5ac1db9a0af16df1bafe60300e11eb913"} Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.471468 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.471926 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.697280 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.857500 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-bundle\") pod \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.857892 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck88t\" (UniqueName: \"kubernetes.io/projected/737ad453-4b93-48b2-aa08-c3e69e6f81e7-kube-api-access-ck88t\") pod \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.857965 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-util\") pod \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\" (UID: \"737ad453-4b93-48b2-aa08-c3e69e6f81e7\") " Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.860195 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-bundle" (OuterVolumeSpecName: "bundle") pod "737ad453-4b93-48b2-aa08-c3e69e6f81e7" (UID: "737ad453-4b93-48b2-aa08-c3e69e6f81e7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.863177 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737ad453-4b93-48b2-aa08-c3e69e6f81e7-kube-api-access-ck88t" (OuterVolumeSpecName: "kube-api-access-ck88t") pod "737ad453-4b93-48b2-aa08-c3e69e6f81e7" (UID: "737ad453-4b93-48b2-aa08-c3e69e6f81e7"). InnerVolumeSpecName "kube-api-access-ck88t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.876163 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-util" (OuterVolumeSpecName: "util") pod "737ad453-4b93-48b2-aa08-c3e69e6f81e7" (UID: "737ad453-4b93-48b2-aa08-c3e69e6f81e7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.960555 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck88t\" (UniqueName: \"kubernetes.io/projected/737ad453-4b93-48b2-aa08-c3e69e6f81e7-kube-api-access-ck88t\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.960945 5034 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-util\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:20 crc kubenswrapper[5034]: I0105 23:36:20.960961 5034 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/737ad453-4b93-48b2-aa08-c3e69e6f81e7-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:36:21 crc kubenswrapper[5034]: I0105 23:36:21.232748 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" event={"ID":"737ad453-4b93-48b2-aa08-c3e69e6f81e7","Type":"ContainerDied","Data":"a1aaf9326a86e1d1e5b837ae2ac2fbc54837fe1721184c98fe0f9aac538b264e"} Jan 05 23:36:21 crc kubenswrapper[5034]: I0105 23:36:21.232807 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1aaf9326a86e1d1e5b837ae2ac2fbc54837fe1721184c98fe0f9aac538b264e" Jan 05 23:36:21 crc kubenswrapper[5034]: I0105 23:36:21.232831 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922" Jan 05 23:36:26 crc kubenswrapper[5034]: I0105 23:36:26.086298 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-k4s29"] Jan 05 23:36:26 crc kubenswrapper[5034]: I0105 23:36:26.098075 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6aa4-account-create-update-zfzvs"] Jan 05 23:36:26 crc kubenswrapper[5034]: I0105 23:36:26.112718 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6aa4-account-create-update-zfzvs"] Jan 05 23:36:26 crc kubenswrapper[5034]: I0105 23:36:26.122120 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-k4s29"] Jan 05 23:36:27 crc kubenswrapper[5034]: I0105 23:36:27.867823 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05bd4ed8-7e38-4469-b295-f475bf906342" path="/var/lib/kubelet/pods/05bd4ed8-7e38-4469-b295-f475bf906342/volumes" Jan 05 23:36:27 crc kubenswrapper[5034]: I0105 23:36:27.891575 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f6a9cba-014e-4c12-8c00-114299efe216" path="/var/lib/kubelet/pods/1f6a9cba-014e-4c12-8c00-114299efe216/volumes" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.370238 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj"] Jan 05 23:36:33 crc kubenswrapper[5034]: E0105 23:36:33.371841 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerName="pull" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.371861 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerName="pull" Jan 05 23:36:33 crc kubenswrapper[5034]: E0105 23:36:33.371917 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" 
containerName="extract" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.371927 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerName="extract" Jan 05 23:36:33 crc kubenswrapper[5034]: E0105 23:36:33.371954 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerName="util" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.371961 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerName="util" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.381642 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="737ad453-4b93-48b2-aa08-c3e69e6f81e7" containerName="extract" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.386861 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.395068 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-29w2s" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.395488 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.397397 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.518451 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj"] Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.528673 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmrvf\" (UniqueName: \"kubernetes.io/projected/9ab76c0a-964d-4bed-a8d1-5fd30f83d707-kube-api-access-qmrvf\") pod \"obo-prometheus-operator-68bc856cb9-w7qpj\" (UID: \"9ab76c0a-964d-4bed-a8d1-5fd30f83d707\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.594200 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf"] Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.595847 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.604150 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.604448 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-wczpp" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.612678 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf"] Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.639869 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmrvf\" (UniqueName: \"kubernetes.io/projected/9ab76c0a-964d-4bed-a8d1-5fd30f83d707-kube-api-access-qmrvf\") pod \"obo-prometheus-operator-68bc856cb9-w7qpj\" (UID: \"9ab76c0a-964d-4bed-a8d1-5fd30f83d707\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.640185 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2d5e61-2b14-473b-9d9d-082751280399-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf\" (UID: \"cf2d5e61-2b14-473b-9d9d-082751280399\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.640539 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf2d5e61-2b14-473b-9d9d-082751280399-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf\" (UID: \"cf2d5e61-2b14-473b-9d9d-082751280399\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.663525 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm"] Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.665428 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.685480 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmrvf\" (UniqueName: \"kubernetes.io/projected/9ab76c0a-964d-4bed-a8d1-5fd30f83d707-kube-api-access-qmrvf\") pod \"obo-prometheus-operator-68bc856cb9-w7qpj\" (UID: \"9ab76c0a-964d-4bed-a8d1-5fd30f83d707\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.723163 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm"] Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.744886 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fdec36c-ddb5-4cfb-8414-5464dd814235-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-86mxm\" (UID: \"2fdec36c-ddb5-4cfb-8414-5464dd814235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.744989 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fdec36c-ddb5-4cfb-8414-5464dd814235-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-86mxm\" (UID: \"2fdec36c-ddb5-4cfb-8414-5464dd814235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.745025 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf2d5e61-2b14-473b-9d9d-082751280399-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf\" (UID: \"cf2d5e61-2b14-473b-9d9d-082751280399\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.745132 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2d5e61-2b14-473b-9d9d-082751280399-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf\" (UID: \"cf2d5e61-2b14-473b-9d9d-082751280399\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.746673 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.765910 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2d5e61-2b14-473b-9d9d-082751280399-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf\" (UID: \"cf2d5e61-2b14-473b-9d9d-082751280399\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.788837 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf2d5e61-2b14-473b-9d9d-082751280399-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf\" (UID: \"cf2d5e61-2b14-473b-9d9d-082751280399\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.847361 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fdec36c-ddb5-4cfb-8414-5464dd814235-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-86mxm\" (UID: \"2fdec36c-ddb5-4cfb-8414-5464dd814235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.847733 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fdec36c-ddb5-4cfb-8414-5464dd814235-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-86mxm\" (UID: \"2fdec36c-ddb5-4cfb-8414-5464dd814235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.854890 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fdec36c-ddb5-4cfb-8414-5464dd814235-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-86mxm\" (UID: \"2fdec36c-ddb5-4cfb-8414-5464dd814235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.876612 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fdec36c-ddb5-4cfb-8414-5464dd814235-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-555df57bb9-86mxm\" (UID: \"2fdec36c-ddb5-4cfb-8414-5464dd814235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.914799 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4klvr"] Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.916474 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4klvr" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.924444 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.924751 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-hs9zr" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.936441 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.953772 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2075b65e-4db7-4345-8739-b9d2db8b4148-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4klvr\" (UID: \"2075b65e-4db7-4345-8739-b9d2db8b4148\") " pod="openshift-operators/observability-operator-59bdc8b94-4klvr" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.953844 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96bs5\" (UniqueName: \"kubernetes.io/projected/2075b65e-4db7-4345-8739-b9d2db8b4148-kube-api-access-96bs5\") pod \"observability-operator-59bdc8b94-4klvr\" (UID: \"2075b65e-4db7-4345-8739-b9d2db8b4148\") " pod="openshift-operators/observability-operator-59bdc8b94-4klvr" Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.966216 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4klvr"] Jan 05 23:36:33 crc kubenswrapper[5034]: I0105 23:36:33.988641 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.059160 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2075b65e-4db7-4345-8739-b9d2db8b4148-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4klvr\" (UID: \"2075b65e-4db7-4345-8739-b9d2db8b4148\") " pod="openshift-operators/observability-operator-59bdc8b94-4klvr" Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.059250 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96bs5\" (UniqueName: \"kubernetes.io/projected/2075b65e-4db7-4345-8739-b9d2db8b4148-kube-api-access-96bs5\") pod \"observability-operator-59bdc8b94-4klvr\" (UID: \"2075b65e-4db7-4345-8739-b9d2db8b4148\") " pod="openshift-operators/observability-operator-59bdc8b94-4klvr" Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.060852 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6d7gn"] Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.063825 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.073676 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-f4db7"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.090395 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96bs5\" (UniqueName: \"kubernetes.io/projected/2075b65e-4db7-4345-8739-b9d2db8b4148-kube-api-access-96bs5\") pod \"observability-operator-59bdc8b94-4klvr\" (UID: \"2075b65e-4db7-4345-8739-b9d2db8b4148\") " pod="openshift-operators/observability-operator-59bdc8b94-4klvr"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.099842 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2075b65e-4db7-4345-8739-b9d2db8b4148-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4klvr\" (UID: \"2075b65e-4db7-4345-8739-b9d2db8b4148\") " pod="openshift-operators/observability-operator-59bdc8b94-4klvr"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.147229 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6d7gn"]
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.163747 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/55cecb6b-56b7-4d48-a9a4-a5b2f74e164c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6d7gn\" (UID: \"55cecb6b-56b7-4d48-a9a4-a5b2f74e164c\") " pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.163831 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9hx\" (UniqueName: \"kubernetes.io/projected/55cecb6b-56b7-4d48-a9a4-a5b2f74e164c-kube-api-access-jt9hx\") pod \"perses-operator-5bf474d74f-6d7gn\" (UID: \"55cecb6b-56b7-4d48-a9a4-a5b2f74e164c\") " pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.272500 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/55cecb6b-56b7-4d48-a9a4-a5b2f74e164c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6d7gn\" (UID: \"55cecb6b-56b7-4d48-a9a4-a5b2f74e164c\") " pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.272558 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9hx\" (UniqueName: \"kubernetes.io/projected/55cecb6b-56b7-4d48-a9a4-a5b2f74e164c-kube-api-access-jt9hx\") pod \"perses-operator-5bf474d74f-6d7gn\" (UID: \"55cecb6b-56b7-4d48-a9a4-a5b2f74e164c\") " pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.273841 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/55cecb6b-56b7-4d48-a9a4-a5b2f74e164c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6d7gn\" (UID: \"55cecb6b-56b7-4d48-a9a4-a5b2f74e164c\") " pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.299750 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4klvr"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.312903 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt9hx\" (UniqueName: \"kubernetes.io/projected/55cecb6b-56b7-4d48-a9a4-a5b2f74e164c-kube-api-access-jt9hx\") pod \"perses-operator-5bf474d74f-6d7gn\" (UID: \"55cecb6b-56b7-4d48-a9a4-a5b2f74e164c\") " pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.399380 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.672165 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj"]
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.849131 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf"]
Jan 05 23:36:34 crc kubenswrapper[5034]: W0105 23:36:34.857837 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2d5e61_2b14_473b_9d9d_082751280399.slice/crio-d5de90ec51fc2dd3534d78c8f8944b164cbbd075d38fc88d6d47567d9b087fdc WatchSource:0}: Error finding container d5de90ec51fc2dd3534d78c8f8944b164cbbd075d38fc88d6d47567d9b087fdc: Status 404 returned error can't find the container with id d5de90ec51fc2dd3534d78c8f8944b164cbbd075d38fc88d6d47567d9b087fdc
Jan 05 23:36:34 crc kubenswrapper[5034]: I0105 23:36:34.993605 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm"]
Jan 05 23:36:35 crc kubenswrapper[5034]: W0105 23:36:35.081789 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2075b65e_4db7_4345_8739_b9d2db8b4148.slice/crio-8d7fbc0f3037453bd34890cda9bae679340f09a270342e8c123c854b2c3c6110 WatchSource:0}: Error finding container 8d7fbc0f3037453bd34890cda9bae679340f09a270342e8c123c854b2c3c6110: Status 404 returned error can't find the container with id 8d7fbc0f3037453bd34890cda9bae679340f09a270342e8c123c854b2c3c6110
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.086850 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4klvr"]
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.107128 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-69ktk"]
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.124236 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-69ktk"]
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.246198 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6d7gn"]
Jan 05 23:36:35 crc kubenswrapper[5034]: W0105 23:36:35.263198 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55cecb6b_56b7_4d48_a9a4_a5b2f74e164c.slice/crio-7c84b29fe217d3025a2fd0480629de3d5aa592db236b3ce91a75595def111915 WatchSource:0}: Error finding container 7c84b29fe217d3025a2fd0480629de3d5aa592db236b3ce91a75595def111915: Status 404 returned error can't find the container with id 7c84b29fe217d3025a2fd0480629de3d5aa592db236b3ce91a75595def111915
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.517393 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" event={"ID":"cf2d5e61-2b14-473b-9d9d-082751280399","Type":"ContainerStarted","Data":"d5de90ec51fc2dd3534d78c8f8944b164cbbd075d38fc88d6d47567d9b087fdc"}
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.518797 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6d7gn" event={"ID":"55cecb6b-56b7-4d48-a9a4-a5b2f74e164c","Type":"ContainerStarted","Data":"7c84b29fe217d3025a2fd0480629de3d5aa592db236b3ce91a75595def111915"}
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.519953 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4klvr" event={"ID":"2075b65e-4db7-4345-8739-b9d2db8b4148","Type":"ContainerStarted","Data":"8d7fbc0f3037453bd34890cda9bae679340f09a270342e8c123c854b2c3c6110"}
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.521129 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" event={"ID":"2fdec36c-ddb5-4cfb-8414-5464dd814235","Type":"ContainerStarted","Data":"5b7be283601922bf5630799541bd5db5a82c45642b8570be35a5bed1cd69b71e"}
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.522235 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj" event={"ID":"9ab76c0a-964d-4bed-a8d1-5fd30f83d707","Type":"ContainerStarted","Data":"fb26102b0aed5ecfa8cfd1be35356b8aae97ca530e67889c85a3dcf6cbec5ab4"}
Jan 05 23:36:35 crc kubenswrapper[5034]: I0105 23:36:35.891287 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b340388-a44b-4ce6-8a42-6b835309583d" path="/var/lib/kubelet/pods/9b340388-a44b-4ce6-8a42-6b835309583d/volumes"
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.662679 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" event={"ID":"cf2d5e61-2b14-473b-9d9d-082751280399","Type":"ContainerStarted","Data":"f4d5a264bb73f6c5990b5772b17fd34c306fb4b4b06d300c5956e55ea3f55b1c"}
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.665825 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6d7gn" event={"ID":"55cecb6b-56b7-4d48-a9a4-a5b2f74e164c","Type":"ContainerStarted","Data":"c3023c3322dcdc8f3cc42eb785ee72ca1aff0da6ae73ce1119eb78d1c800841e"}
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.666567 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.669043 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4klvr" event={"ID":"2075b65e-4db7-4345-8739-b9d2db8b4148","Type":"ContainerStarted","Data":"f676d8cfb6e44274266e199d6360360b36ea01aaeece1cb16a25f9d4d96fd0b2"}
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.669450 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-4klvr"
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.671256 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" event={"ID":"2fdec36c-ddb5-4cfb-8414-5464dd814235","Type":"ContainerStarted","Data":"d2145db2e4d79f9b2968d31762fe1536236c1e4c308365f26325f7b872b72f54"}
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.673268 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj" event={"ID":"9ab76c0a-964d-4bed-a8d1-5fd30f83d707","Type":"ContainerStarted","Data":"2c0db81cb80884d5cedf38ab0e2310689ba99f60909a29ed8537171a6cd83960"}
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.677951 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-4klvr"
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.691280 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf" podStartSLOduration=3.212730516 podStartE2EDuration="11.691250765s" podCreationTimestamp="2026-01-05 23:36:33 +0000 UTC" firstStartedPulling="2026-01-05 23:36:34.877813717 +0000 UTC m=+6287.249813156" lastFinishedPulling="2026-01-05 23:36:43.356333966 +0000 UTC m=+6295.728333405" observedRunningTime="2026-01-05 23:36:44.68055147 +0000 UTC m=+6297.052550939" watchObservedRunningTime="2026-01-05 23:36:44.691250765 +0000 UTC m=+6297.063250204"
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.721738 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-555df57bb9-86mxm" podStartSLOduration=3.452333398 podStartE2EDuration="11.721709684s" podCreationTimestamp="2026-01-05 23:36:33 +0000 UTC" firstStartedPulling="2026-01-05 23:36:35.007277648 +0000 UTC m=+6287.379277087" lastFinishedPulling="2026-01-05 23:36:43.276653934 +0000 UTC m=+6295.648653373" observedRunningTime="2026-01-05 23:36:44.711165403 +0000 UTC m=+6297.083164862" watchObservedRunningTime="2026-01-05 23:36:44.721709684 +0000 UTC m=+6297.093709123"
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.773986 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-4klvr" podStartSLOduration=3.437722042 podStartE2EDuration="11.773960803s" podCreationTimestamp="2026-01-05 23:36:33 +0000 UTC" firstStartedPulling="2026-01-05 23:36:35.093214989 +0000 UTC m=+6287.465214438" lastFinishedPulling="2026-01-05 23:36:43.42945376 +0000 UTC m=+6295.801453199" observedRunningTime="2026-01-05 23:36:44.770574117 +0000 UTC m=+6297.142573556" watchObservedRunningTime="2026-01-05 23:36:44.773960803 +0000 UTC m=+6297.145960242"
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.807455 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7qpj" podStartSLOduration=3.128407402 podStartE2EDuration="11.807425947s" podCreationTimestamp="2026-01-05 23:36:33 +0000 UTC" firstStartedPulling="2026-01-05 23:36:34.676336193 +0000 UTC m=+6287.048335632" lastFinishedPulling="2026-01-05 23:36:43.355354738 +0000 UTC m=+6295.727354177" observedRunningTime="2026-01-05 23:36:44.797746942 +0000 UTC m=+6297.169746381" watchObservedRunningTime="2026-01-05 23:36:44.807425947 +0000 UTC m=+6297.179425386"
Jan 05 23:36:44 crc kubenswrapper[5034]: I0105 23:36:44.848792 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-6d7gn" podStartSLOduration=3.838206988 podStartE2EDuration="11.848766655s" podCreationTimestamp="2026-01-05 23:36:33 +0000 UTC" firstStartedPulling="2026-01-05 23:36:35.266019175 +0000 UTC m=+6287.638018614" lastFinishedPulling="2026-01-05 23:36:43.276578842 +0000 UTC m=+6295.648578281" observedRunningTime="2026-01-05 23:36:44.837517585 +0000 UTC m=+6297.209517024" watchObservedRunningTime="2026-01-05 23:36:44.848766655 +0000 UTC m=+6297.220766094"
Jan 05 23:36:50 crc kubenswrapper[5034]: I0105 23:36:50.469122 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 23:36:50 crc kubenswrapper[5034]: I0105 23:36:50.469744 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 23:36:54 crc kubenswrapper[5034]: I0105 23:36:54.405842 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-6d7gn"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.420823 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.422384 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" containerName="openstackclient" containerID="cri-o://45f0c24ea020f76522361b85eb4fb2f887f4666d1ae2898da6078be02686deb6" gracePeriod=2
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.447053 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.544462 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 05 23:36:57 crc kubenswrapper[5034]: E0105 23:36:57.545127 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" containerName="openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.545146 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" containerName="openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.545395 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" containerName="openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.546286 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.560747 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.613327 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.613380 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.613411 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnkm8\" (UniqueName: \"kubernetes.io/projected/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-kube-api-access-nnkm8\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.613589 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config-secret\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.622585 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.698761 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.700137 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.715508 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.715561 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.715602 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnkm8\" (UniqueName: \"kubernetes.io/projected/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-kube-api-access-nnkm8\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.715657 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vhdpn"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.715754 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config-secret\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.717700 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.739804 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.749600 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config-secret\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.796549 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.817986 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8bf\" (UniqueName: \"kubernetes.io/projected/9a16889f-0260-4fcc-8567-81a3da64667d-kube-api-access-qb8bf\") pod \"kube-state-metrics-0\" (UID: \"9a16889f-0260-4fcc-8567-81a3da64667d\") " pod="openstack/kube-state-metrics-0"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.831818 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnkm8\" (UniqueName: \"kubernetes.io/projected/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-kube-api-access-nnkm8\") pod \"openstackclient\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.920231 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.936021 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb8bf\" (UniqueName: \"kubernetes.io/projected/9a16889f-0260-4fcc-8567-81a3da64667d-kube-api-access-qb8bf\") pod \"kube-state-metrics-0\" (UID: \"9a16889f-0260-4fcc-8567-81a3da64667d\") " pod="openstack/kube-state-metrics-0"
Jan 05 23:36:57 crc kubenswrapper[5034]: I0105 23:36:57.984914 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb8bf\" (UniqueName: \"kubernetes.io/projected/9a16889f-0260-4fcc-8567-81a3da64667d-kube-api-access-qb8bf\") pod \"kube-state-metrics-0\" (UID: \"9a16889f-0260-4fcc-8567-81a3da64667d\") " pod="openstack/kube-state-metrics-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.025733 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.816828 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.820021 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.822858 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.823186 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.823407 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.825608 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-g74lk"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.825901 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.857929 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.987155 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8l2w\" (UniqueName: \"kubernetes.io/projected/32d9b801-27e9-4825-a130-6531d245e769-kube-api-access-f8l2w\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.987232 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32d9b801-27e9-4825-a130-6531d245e769-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.987305 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/32d9b801-27e9-4825-a130-6531d245e769-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.987327 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.987492 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.987541 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:58 crc kubenswrapper[5034]: I0105 23:36:58.987586 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32d9b801-27e9-4825-a130-6531d245e769-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.046860 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.089681 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.089738 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.089783 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32d9b801-27e9-4825-a130-6531d245e769-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.089830 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8l2w\" (UniqueName: \"kubernetes.io/projected/32d9b801-27e9-4825-a130-6531d245e769-kube-api-access-f8l2w\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.089859 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32d9b801-27e9-4825-a130-6531d245e769-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.089904 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/32d9b801-27e9-4825-a130-6531d245e769-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.089929 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.093505 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/32d9b801-27e9-4825-a130-6531d245e769-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.111720 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32d9b801-27e9-4825-a130-6531d245e769-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.117897 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.131313 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32d9b801-27e9-4825-a130-6531d245e769-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.146028 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.146639 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8l2w\" (UniqueName: \"kubernetes.io/projected/32d9b801-27e9-4825-a130-6531d245e769-kube-api-access-f8l2w\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.173023 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32d9b801-27e9-4825-a130-6531d245e769-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"32d9b801-27e9-4825-a130-6531d245e769\") " pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.203486 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.297439 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 05 23:36:59 crc kubenswrapper[5034]: W0105 23:36:59.344220 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a16889f_0260_4fcc_8567_81a3da64667d.slice/crio-3d1df2c93d71dd02ea6c9c15998c80d17938f75578417b68ad051c739fbed755 WatchSource:0}: Error finding container 3d1df2c93d71dd02ea6c9c15998c80d17938f75578417b68ad051c739fbed755: Status 404 returned error can't find the container with id 3d1df2c93d71dd02ea6c9c15998c80d17938f75578417b68ad051c739fbed755
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.471230 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.475741 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.484634 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.484781 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.485031 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.485193 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.485275 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.485342 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.485355 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pwjz8"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.485436 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.503386 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.630056 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.630275 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.630314 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8276s\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-kube-api-access-8276s\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.630473 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.630806 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/73aca9b8-f04f-4490-a49d-128f8c9686c8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.631038 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.631118 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.631157 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.631269 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.631292 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-config\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.733673 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/73aca9b8-f04f-4490-a49d-128f8c9686c8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.733988 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.734020 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.734044 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.734096 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.734117 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-config\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.734151 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.734177 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.734196 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8276s\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-kube-api-access-8276s\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.734247 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.735190 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.737548 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.740054 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.743142 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/73aca9b8-f04f-4490-a49d-128f8c9686c8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.743502 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-config\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.744602 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.745008 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.745036 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2f416067a4e274c58975fa574303358506362fcca46365bf62044b89d6a3e9d4/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.749629 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.759966 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.775466 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8276s\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-kube-api-access-8276s\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.832978 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"prometheus-metric-storage-0\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.896726 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db","Type":"ContainerStarted","Data":"9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce"}
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.896788 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db","Type":"ContainerStarted","Data":"0039b515a013778fd713b39b8f8f846977094997e444acbc2d2ab3e7e5dd15e3"}
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.912977 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a16889f-0260-4fcc-8567-81a3da64667d","Type":"ContainerStarted","Data":"3d1df2c93d71dd02ea6c9c15998c80d17938f75578417b68ad051c739fbed755"}
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.920244 5034 generic.go:334] "Generic (PLEG): container finished" podID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" containerID="45f0c24ea020f76522361b85eb4fb2f887f4666d1ae2898da6078be02686deb6" exitCode=137
Jan 05 23:36:59 crc kubenswrapper[5034]: I0105 23:36:59.935977 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.935949451 podStartE2EDuration="2.935949451s" podCreationTimestamp="2026-01-05 23:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:36:59.921288313 +0000 UTC m=+6312.293287772" watchObservedRunningTime="2026-01-05 23:36:59.935949451 +0000 UTC m=+6312.307948890"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.026717 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.133121 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.213628 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.217983 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.401211 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config\") pod \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") "
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.401646 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config-secret\") pod \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") "
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.401702 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-combined-ca-bundle\") pod \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") "
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.401978 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhws\" (UniqueName: \"kubernetes.io/projected/6c9592dc-4604-488b-912a-b3ab5e11fd3b-kube-api-access-swhws\") pod \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\" (UID: \"6c9592dc-4604-488b-912a-b3ab5e11fd3b\") "
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.416894 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9592dc-4604-488b-912a-b3ab5e11fd3b-kube-api-access-swhws" (OuterVolumeSpecName: "kube-api-access-swhws") pod "6c9592dc-4604-488b-912a-b3ab5e11fd3b" (UID: "6c9592dc-4604-488b-912a-b3ab5e11fd3b"). InnerVolumeSpecName "kube-api-access-swhws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.447036 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6c9592dc-4604-488b-912a-b3ab5e11fd3b" (UID: "6c9592dc-4604-488b-912a-b3ab5e11fd3b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.463204 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c9592dc-4604-488b-912a-b3ab5e11fd3b" (UID: "6c9592dc-4604-488b-912a-b3ab5e11fd3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.504846 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.504888 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swhws\" (UniqueName: \"kubernetes.io/projected/6c9592dc-4604-488b-912a-b3ab5e11fd3b-kube-api-access-swhws\") on node \"crc\" DevicePath \"\""
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.504900 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.516498 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6c9592dc-4604-488b-912a-b3ab5e11fd3b" (UID: "6c9592dc-4604-488b-912a-b3ab5e11fd3b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.607389 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c9592dc-4604-488b-912a-b3ab5e11fd3b-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.747322 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 23:37:00 crc kubenswrapper[5034]: W0105 23:37:00.747994 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73aca9b8_f04f_4490_a49d_128f8c9686c8.slice/crio-fe9e4fb6acedd7e97546e2d44be61f7bda7aecb7642c608a2673de74605630cd WatchSource:0}: Error finding container fe9e4fb6acedd7e97546e2d44be61f7bda7aecb7642c608a2673de74605630cd: Status 404 returned error can't find the container with id fe9e4fb6acedd7e97546e2d44be61f7bda7aecb7642c608a2673de74605630cd
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.934920 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a16889f-0260-4fcc-8567-81a3da64667d","Type":"ContainerStarted","Data":"a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690"}
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.935021 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.937272 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerStarted","Data":"fe9e4fb6acedd7e97546e2d44be61f7bda7aecb7642c608a2673de74605630cd"}
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.938879 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"32d9b801-27e9-4825-a130-6531d245e769","Type":"ContainerStarted","Data":"f5e6b04f48d4b2bd7e64254f37dd4e0743fe067eb0a1b5bfc553a5f27ea8c508"}
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.940261 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.940310 5034 scope.go:117] "RemoveContainer" containerID="45f0c24ea020f76522361b85eb4fb2f887f4666d1ae2898da6078be02686deb6"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.958716 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.958691 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.431656444 podStartE2EDuration="3.958658379s" podCreationTimestamp="2026-01-05 23:36:57 +0000 UTC" firstStartedPulling="2026-01-05 23:36:59.389006207 +0000 UTC m=+6311.761005646" lastFinishedPulling="2026-01-05 23:36:59.916008142 +0000 UTC m=+6312.288007581" observedRunningTime="2026-01-05 23:37:00.955224331 +0000 UTC m=+6313.327223780" watchObservedRunningTime="2026-01-05 23:37:00.958658379 +0000 UTC m=+6313.330657818"
Jan 05 23:37:00 crc kubenswrapper[5034]: I0105 23:37:00.975304 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db"
Jan 05 23:37:01 crc kubenswrapper[5034]: I0105 23:37:01.857479 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9592dc-4604-488b-912a-b3ab5e11fd3b" path="/var/lib/kubelet/pods/6c9592dc-4604-488b-912a-b3ab5e11fd3b/volumes"
Jan 05 23:37:06 crc kubenswrapper[5034]: I0105 23:37:06.028654 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-sjpgk"]
Jan 05 23:37:06 crc kubenswrapper[5034]: I0105 23:37:06.043245 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-833e-account-create-update-skxfj"]
Jan 05 23:37:06 crc kubenswrapper[5034]: I0105 23:37:06.054230 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-833e-account-create-update-skxfj"]
Jan 05 23:37:06 crc kubenswrapper[5034]: I0105 23:37:06.066178 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-sjpgk"]
Jan 05 23:37:07 crc kubenswrapper[5034]: I0105 23:37:07.013511 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerStarted","Data":"5548450e6c0c193e74140977d4565679da0011906488c6b5227fe473596021de"}
Jan 05 23:37:07 crc kubenswrapper[5034]: I0105 23:37:07.015628 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"32d9b801-27e9-4825-a130-6531d245e769","Type":"ContainerStarted","Data":"98cdd22c250531a2125a3c6cdc3ab2082dda58d40d0b229798df577c6318939b"}
Jan 05 23:37:07 crc kubenswrapper[5034]: I0105 23:37:07.851874 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70255284-3c60-4afa-afae-d8e33b86d065" path="/var/lib/kubelet/pods/70255284-3c60-4afa-afae-d8e33b86d065/volumes"
Jan 05 23:37:07 crc kubenswrapper[5034]: I0105 23:37:07.852913 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97147617-d53b-41e7-871a-218080186366" path="/var/lib/kubelet/pods/97147617-d53b-41e7-871a-218080186366/volumes"
Jan 05 23:37:08 crc kubenswrapper[5034]: I0105 23:37:08.034849 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 05 23:37:11 crc kubenswrapper[5034]: I0105 23:37:11.503134 5034 scope.go:117] "RemoveContainer" containerID="0b9af151b6bda7bb0a4091f8cda4574e9b6c881ebb0ed1a95f7cc8a61ce5dc48"
Jan 05 23:37:11 crc kubenswrapper[5034]: I0105 23:37:11.528804 5034 scope.go:117] "RemoveContainer" containerID="25212514badf496f706007c603486cf621e4055705ad7924ef723537cdfaf38f"
Jan 05 23:37:11 crc kubenswrapper[5034]: I0105 23:37:11.589244 5034 scope.go:117] "RemoveContainer" containerID="af0c1a41334f2803ecff6ba52350d932e39a2ca2c9936ca0ab7f2434ce1b9699"
Jan 05 23:37:11 crc kubenswrapper[5034]: I0105 23:37:11.636529 5034 scope.go:117] "RemoveContainer" containerID="dc20f1ffc2649b535c454a4fafa1cb891ad9004869dd5ec74de29a803c43b5d4"
Jan 05 23:37:11 crc kubenswrapper[5034]: I0105 23:37:11.716975 5034 scope.go:117] "RemoveContainer" containerID="543753a3b743f25eb204cf68013c2f230dd0958351bb3b097bce31fe1913718e"
Jan 05 23:37:13 crc kubenswrapper[5034]: I0105 23:37:13.036413 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ztkxg"]
Jan 05 23:37:13 crc kubenswrapper[5034]: I0105 23:37:13.049486 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ztkxg"]
Jan 05 23:37:13 crc kubenswrapper[5034]: I0105 23:37:13.850758 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b217503b-88f8-4c9c-a62a-98fa4c2708fa" path="/var/lib/kubelet/pods/b217503b-88f8-4c9c-a62a-98fa4c2708fa/volumes"
Jan 05 23:37:14 crc kubenswrapper[5034]: I0105 23:37:14.084880 5034 generic.go:334] "Generic (PLEG): container finished" podID="32d9b801-27e9-4825-a130-6531d245e769" containerID="98cdd22c250531a2125a3c6cdc3ab2082dda58d40d0b229798df577c6318939b" exitCode=0
Jan 05 23:37:14 crc kubenswrapper[5034]: I0105 23:37:14.084956 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"32d9b801-27e9-4825-a130-6531d245e769","Type":"ContainerDied","Data":"98cdd22c250531a2125a3c6cdc3ab2082dda58d40d0b229798df577c6318939b"}
Jan 05 23:37:14 crc kubenswrapper[5034]: I0105 23:37:14.089703 5034 generic.go:334] "Generic (PLEG): container finished" podID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerID="5548450e6c0c193e74140977d4565679da0011906488c6b5227fe473596021de" exitCode=0
Jan 05 23:37:14 crc kubenswrapper[5034]: I0105 23:37:14.089861 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerDied","Data":"5548450e6c0c193e74140977d4565679da0011906488c6b5227fe473596021de"}
Jan 05 23:37:17 crc kubenswrapper[5034]: I0105 23:37:17.134181 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"32d9b801-27e9-4825-a130-6531d245e769","Type":"ContainerStarted","Data":"a7bbb2b9a644d949dc8a510a55a183156d020a734a9c4147ddcbb642bc867ad3"}
Jan 05 23:37:20 crc kubenswrapper[5034]: I0105 23:37:20.469591 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 05 23:37:20 crc kubenswrapper[5034]: I0105 23:37:20.469938 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 05 23:37:20 crc kubenswrapper[5034]: I0105 23:37:20.469997 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc"
Jan 05 23:37:20 crc kubenswrapper[5034]: I0105 23:37:20.470997 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 05 23:37:20 crc kubenswrapper[5034]: I0105 23:37:20.471066 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" gracePeriod=600
Jan 05 23:37:21 crc kubenswrapper[5034]: E0105 23:37:21.597869 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:37:22 crc kubenswrapper[5034]: I0105 23:37:22.197195 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"32d9b801-27e9-4825-a130-6531d245e769","Type":"ContainerStarted","Data":"85ff6ea312388c906004fd8b81f305689945efac9faf23756cf89a8b51538b9b"}
Jan 05 23:37:22 crc kubenswrapper[5034]: I0105 23:37:22.197560 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:37:22 crc kubenswrapper[5034]: I0105 23:37:22.202903 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Jan 05 23:37:22 crc kubenswrapper[5034]: I0105 23:37:22.203890 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerStarted","Data":"e1e6f1996a116d86fc2dc2f5c385684aeaec0ef3e01427bdb723d70eccd8b282"}
"Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" exitCode=0 Jan 05 23:37:22 crc kubenswrapper[5034]: I0105 23:37:22.206417 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16"} Jan 05 23:37:22 crc kubenswrapper[5034]: I0105 23:37:22.206468 5034 scope.go:117] "RemoveContainer" containerID="a88a5134ff25bff3380251394560c9cbca0838a1161bcde80ce38bf8d4b764a1" Jan 05 23:37:22 crc kubenswrapper[5034]: I0105 23:37:22.207117 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:37:22 crc kubenswrapper[5034]: E0105 23:37:22.207424 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:37:22 crc kubenswrapper[5034]: I0105 23:37:22.230507 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.732298149 podStartE2EDuration="24.230471985s" podCreationTimestamp="2026-01-05 23:36:58 +0000 UTC" firstStartedPulling="2026-01-05 23:37:00.018926407 +0000 UTC m=+6312.390925846" lastFinishedPulling="2026-01-05 23:37:16.517100233 +0000 UTC m=+6328.889099682" observedRunningTime="2026-01-05 23:37:22.221966613 +0000 UTC m=+6334.593966052" watchObservedRunningTime="2026-01-05 23:37:22.230471985 +0000 UTC m=+6334.602471424" Jan 05 23:37:25 crc kubenswrapper[5034]: I0105 23:37:25.600249 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 23:37:26 crc kubenswrapper[5034]: I0105 23:37:26.267276 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerStarted","Data":"4a72a8bbb105a1ca5f931fbc8643a5dbeec356e909b2138330882e070712f82f"} Jan 05 23:37:30 crc kubenswrapper[5034]: I0105 23:37:30.318383 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerStarted","Data":"0b5f7010e5b351b961562333aa1d4b67a1ed89801f383a91d145aae35a129339"} Jan 05 23:37:32 crc kubenswrapper[5034]: I0105 23:37:32.838668 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:37:32 crc kubenswrapper[5034]: E0105 23:37:32.839343 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:37:35 crc kubenswrapper[5034]: I0105 23:37:35.134798 5034 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.213321 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=13.709462471 podStartE2EDuration="42.213293197s" podCreationTimestamp="2026-01-05 23:36:58 +0000 UTC" firstStartedPulling="2026-01-05 23:37:00.751233525 +0000 UTC m=+6313.123232964" lastFinishedPulling="2026-01-05 23:37:29.255064251 +0000 UTC m=+6341.627063690" observedRunningTime="2026-01-05 23:37:30.345767118 +0000 UTC m=+6342.717766557" watchObservedRunningTime="2026-01-05 23:37:40.213293197 +0000 UTC m=+6352.585292636" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.217435 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.220141 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.224693 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.224991 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.230401 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.305930 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.306003 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-scripts\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.306026 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-config-data\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.306108 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-log-httpd\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.306180 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.306224 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-run-httpd\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.306279 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bwr\" (UniqueName: \"kubernetes.io/projected/1d0f7073-d292-4943-8245-22b8485cfee9-kube-api-access-j5bwr\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.408291 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-log-httpd\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.408723 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.408771 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-run-httpd\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.408848 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5bwr\" (UniqueName: \"kubernetes.io/projected/1d0f7073-d292-4943-8245-22b8485cfee9-kube-api-access-j5bwr\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.408932 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.408939 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-log-httpd\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.408980 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-scripts\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.409002 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-config-data\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.409291 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-run-httpd\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.447884 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.448545 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.449185 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-config-data\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.450206 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-scripts\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.457366 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5bwr\" (UniqueName: \"kubernetes.io/projected/1d0f7073-d292-4943-8245-22b8485cfee9-kube-api-access-j5bwr\") pod \"ceilometer-0\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " pod="openstack/ceilometer-0" Jan 05 23:37:40 crc kubenswrapper[5034]: I0105 23:37:40.543185 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:37:41 crc kubenswrapper[5034]: I0105 23:37:41.126772 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:37:41 crc kubenswrapper[5034]: W0105 23:37:41.129702 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d0f7073_d292_4943_8245_22b8485cfee9.slice/crio-b870326998f5178b34f171c8f37233ede4069661e0c06fe93e461ad9b752b4a6 WatchSource:0}: Error finding container b870326998f5178b34f171c8f37233ede4069661e0c06fe93e461ad9b752b4a6: Status 404 returned error can't find the container with id b870326998f5178b34f171c8f37233ede4069661e0c06fe93e461ad9b752b4a6 Jan 05 23:37:41 crc kubenswrapper[5034]: I0105 23:37:41.418911 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerStarted","Data":"b870326998f5178b34f171c8f37233ede4069661e0c06fe93e461ad9b752b4a6"} Jan 05 23:37:42 crc kubenswrapper[5034]: I0105 23:37:42.433529 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerStarted","Data":"7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0"} Jan 05 23:37:43 crc kubenswrapper[5034]: I0105 23:37:43.457851 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerStarted","Data":"079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5"} Jan 05 23:37:44 crc kubenswrapper[5034]: I0105 23:37:44.472342 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerStarted","Data":"d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5"} Jan 05 23:37:45 crc kubenswrapper[5034]: I0105 23:37:45.133971 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:45 crc kubenswrapper[5034]: I0105 23:37:45.137703 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:45 crc kubenswrapper[5034]: I0105 23:37:45.487255 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:46 crc kubenswrapper[5034]: I0105 23:37:46.495803 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerStarted","Data":"34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1"} Jan 05 23:37:46 crc kubenswrapper[5034]: I0105 23:37:46.497584 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 23:37:46 crc kubenswrapper[5034]: I0105 23:37:46.839066 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:37:46 crc kubenswrapper[5034]: E0105 23:37:46.839406 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.044724 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.848135948 podStartE2EDuration="7.044696835s" podCreationTimestamp="2026-01-05 23:37:40 +0000 UTC" firstStartedPulling="2026-01-05 23:37:41.132551787 +0000 UTC m=+6353.504551226" lastFinishedPulling="2026-01-05 23:37:45.329112674 +0000 UTC m=+6357.701112113" observedRunningTime="2026-01-05 23:37:46.520570012 +0000 UTC m=+6358.892569451" watchObservedRunningTime="2026-01-05 23:37:47.044696835 +0000 UTC m=+6359.416696274" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.049048 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.049687 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" containerName="openstackclient" containerID="cri-o://9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce" gracePeriod=2 Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.058827 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.079314 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 05 23:37:47 crc kubenswrapper[5034]: E0105 23:37:47.079914 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" containerName="openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.079939 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" containerName="openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.080151 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" containerName="openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.081001 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.085476 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" podUID="a84f3fae-075e-46ce-9d73-12611ea3eebd" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.115433 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.192665 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a84f3fae-075e-46ce-9d73-12611ea3eebd-openstack-config-secret\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.192715 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2nf\" (UniqueName: \"kubernetes.io/projected/a84f3fae-075e-46ce-9d73-12611ea3eebd-kube-api-access-9x2nf\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.192849 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84f3fae-075e-46ce-9d73-12611ea3eebd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.193057 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a84f3fae-075e-46ce-9d73-12611ea3eebd-openstack-config\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.294695 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a84f3fae-075e-46ce-9d73-12611ea3eebd-openstack-config\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.294797 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a84f3fae-075e-46ce-9d73-12611ea3eebd-openstack-config-secret\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.294817 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2nf\" (UniqueName: \"kubernetes.io/projected/a84f3fae-075e-46ce-9d73-12611ea3eebd-kube-api-access-9x2nf\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.294856 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84f3fae-075e-46ce-9d73-12611ea3eebd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 
23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.295665 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a84f3fae-075e-46ce-9d73-12611ea3eebd-openstack-config\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.306026 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a84f3fae-075e-46ce-9d73-12611ea3eebd-openstack-config-secret\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.306288 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84f3fae-075e-46ce-9d73-12611ea3eebd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.315145 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2nf\" (UniqueName: \"kubernetes.io/projected/a84f3fae-075e-46ce-9d73-12611ea3eebd-kube-api-access-9x2nf\") pod \"openstackclient\" (UID: \"a84f3fae-075e-46ce-9d73-12611ea3eebd\") " pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.407625 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 23:37:47 crc kubenswrapper[5034]: W0105 23:37:47.945191 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda84f3fae_075e_46ce_9d73_12611ea3eebd.slice/crio-5be7573706c0a2dff878ecd7ba48727ac1e737bb1d843f2916da22f6b688f54a WatchSource:0}: Error finding container 5be7573706c0a2dff878ecd7ba48727ac1e737bb1d843f2916da22f6b688f54a: Status 404 returned error can't find the container with id 5be7573706c0a2dff878ecd7ba48727ac1e737bb1d843f2916da22f6b688f54a Jan 05 23:37:47 crc kubenswrapper[5034]: I0105 23:37:47.951712 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 05 23:37:48 crc kubenswrapper[5034]: I0105 23:37:48.531216 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a84f3fae-075e-46ce-9d73-12611ea3eebd","Type":"ContainerStarted","Data":"edec81eee019a5f1e11a9915e8f7dace6fb1d1aa0acc5f91c76df7bdfdb34039"} Jan 05 23:37:48 crc kubenswrapper[5034]: I0105 23:37:48.531778 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a84f3fae-075e-46ce-9d73-12611ea3eebd","Type":"ContainerStarted","Data":"5be7573706c0a2dff878ecd7ba48727ac1e737bb1d843f2916da22f6b688f54a"} Jan 05 23:37:48 crc kubenswrapper[5034]: I0105 23:37:48.553785 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 23:37:48 crc kubenswrapper[5034]: I0105 23:37:48.554428 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="prometheus" containerID="cri-o://e1e6f1996a116d86fc2dc2f5c385684aeaec0ef3e01427bdb723d70eccd8b282" gracePeriod=600 Jan 05 23:37:48 crc kubenswrapper[5034]: I0105 23:37:48.554462 5034 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="thanos-sidecar" containerID="cri-o://0b5f7010e5b351b961562333aa1d4b67a1ed89801f383a91d145aae35a129339" gracePeriod=600 Jan 05 23:37:48 crc kubenswrapper[5034]: I0105 23:37:48.554546 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="config-reloader" containerID="cri-o://4a72a8bbb105a1ca5f931fbc8643a5dbeec356e909b2138330882e070712f82f" gracePeriod=600 Jan 05 23:37:48 crc kubenswrapper[5034]: I0105 23:37:48.571728 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.5694870490000001 podStartE2EDuration="1.569487049s" podCreationTimestamp="2026-01-05 23:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:37:48.560331888 +0000 UTC m=+6360.932331327" watchObservedRunningTime="2026-01-05 23:37:48.569487049 +0000 UTC m=+6360.941486488" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.475151 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.479725 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" podUID="a84f3fae-075e-46ce-9d73-12611ea3eebd" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.546816 5034 generic.go:334] "Generic (PLEG): container finished" podID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" containerID="9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce" exitCode=137 Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.546905 5034 scope.go:117] "RemoveContainer" containerID="9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.547051 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.555576 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" podUID="a84f3fae-075e-46ce-9d73-12611ea3eebd" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.566211 5034 generic.go:334] "Generic (PLEG): container finished" podID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerID="0b5f7010e5b351b961562333aa1d4b67a1ed89801f383a91d145aae35a129339" exitCode=0 Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.566259 5034 generic.go:334] "Generic (PLEG): container finished" podID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerID="4a72a8bbb105a1ca5f931fbc8643a5dbeec356e909b2138330882e070712f82f" exitCode=0 Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.566269 5034 generic.go:334] "Generic (PLEG): container finished" podID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerID="e1e6f1996a116d86fc2dc2f5c385684aeaec0ef3e01427bdb723d70eccd8b282" exitCode=0 Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.566641 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerDied","Data":"0b5f7010e5b351b961562333aa1d4b67a1ed89801f383a91d145aae35a129339"} Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.566684 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerDied","Data":"4a72a8bbb105a1ca5f931fbc8643a5dbeec356e909b2138330882e070712f82f"} Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.566700 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerDied","Data":"e1e6f1996a116d86fc2dc2f5c385684aeaec0ef3e01427bdb723d70eccd8b282"} Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.570064 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config-secret\") pod \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.570185 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-combined-ca-bundle\") pod \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.570278 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnkm8\" (UniqueName: \"kubernetes.io/projected/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-kube-api-access-nnkm8\") pod \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.570362 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config\") pod \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\" (UID: \"63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.577788 
5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-kube-api-access-nnkm8" (OuterVolumeSpecName: "kube-api-access-nnkm8") pod "63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" (UID: "63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db"). InnerVolumeSpecName "kube-api-access-nnkm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.607424 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" (UID: "63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.610994 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" (UID: "63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.654104 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" (UID: "63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.675686 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.675790 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.675804 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnkm8\" (UniqueName: \"kubernetes.io/projected/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-kube-api-access-nnkm8\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.675815 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.743434 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.752281 5034 scope.go:117] "RemoveContainer" containerID="9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce" Jan 05 23:37:49 crc kubenswrapper[5034]: E0105 23:37:49.752959 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce\": container with ID starting with 9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce not found: ID does not exist" containerID="9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.753006 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce"} err="failed to get container status \"9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce\": rpc error: code = NotFound desc = could not find container \"9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce\": container with ID starting with 9067aec28ff36994951075d93a83acf1781d4589b74b24b198129162828f21ce not found: ID does not exist" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.874683 5034 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" podUID="a84f3fae-075e-46ce-9d73-12611ea3eebd" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.881915 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-tls-assets\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.881997 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/73aca9b8-f04f-4490-a49d-128f8c9686c8-config-out\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.882033 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-config\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.882272 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.882355 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-1\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.882457 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-0\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.882497 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-web-config\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.882618 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-thanos-prometheus-http-client-file\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.882719 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8276s\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-kube-api-access-8276s\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.882747 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-2\") pod \"73aca9b8-f04f-4490-a49d-128f8c9686c8\" (UID: \"73aca9b8-f04f-4490-a49d-128f8c9686c8\") " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.884562 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.884845 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.884897 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.891349 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.892114 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-config" (OuterVolumeSpecName: "config") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.892224 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db" path="/var/lib/kubelet/pods/63105d5e-6b7c-41b8-a6b5-58a3e1d9d2db/volumes" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.892494 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-kube-api-access-8276s" (OuterVolumeSpecName: "kube-api-access-8276s") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "kube-api-access-8276s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.899163 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.899365 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73aca9b8-f04f-4490-a49d-128f8c9686c8-config-out" (OuterVolumeSpecName: "config-out") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.930950 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.949540 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-web-config" (OuterVolumeSpecName: "web-config") pod "73aca9b8-f04f-4490-a49d-128f8c9686c8" (UID: "73aca9b8-f04f-4490-a49d-128f8c9686c8"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985366 5034 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985412 5034 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-web-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985430 5034 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985447 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8276s\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-kube-api-access-8276s\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985460 5034 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985477 5034 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/73aca9b8-f04f-4490-a49d-128f8c9686c8-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985488 5034 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/73aca9b8-f04f-4490-a49d-128f8c9686c8-config-out\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985499 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/73aca9b8-f04f-4490-a49d-128f8c9686c8-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985539 5034 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") on node \"crc\" " Jan 05 23:37:49 crc kubenswrapper[5034]: I0105 23:37:49.985550 5034 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/73aca9b8-f04f-4490-a49d-128f8c9686c8-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.013383 5034 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.013601 5034 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2") on node "crc" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.087996 5034 reconciler_common.go:293] "Volume detached for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.283397 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-vmrvq"] Jan 05 23:37:50 crc kubenswrapper[5034]: E0105 23:37:50.284152 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="thanos-sidecar" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.284173 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="thanos-sidecar" Jan 05 23:37:50 crc kubenswrapper[5034]: E0105 23:37:50.284219 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="prometheus" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.284228 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="prometheus" Jan 05 23:37:50 crc kubenswrapper[5034]: E0105 23:37:50.284240 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="config-reloader" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.284249 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="config-reloader" Jan 05 23:37:50 crc kubenswrapper[5034]: E0105 23:37:50.284275 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="init-config-reloader" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.284284 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="init-config-reloader" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.284501 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="config-reloader" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.284522 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="prometheus" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.284537 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" containerName="thanos-sidecar" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.285395 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.299993 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vmrvq"] Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.394436 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bcc6da-3adc-495c-b32c-c328ecd78165-operator-scripts\") pod \"aodh-db-create-vmrvq\" (UID: \"c1bcc6da-3adc-495c-b32c-c328ecd78165\") " pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.394845 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcm8c\" (UniqueName: \"kubernetes.io/projected/c1bcc6da-3adc-495c-b32c-c328ecd78165-kube-api-access-tcm8c\") pod \"aodh-db-create-vmrvq\" (UID: \"c1bcc6da-3adc-495c-b32c-c328ecd78165\") " pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.498036 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcm8c\" (UniqueName: \"kubernetes.io/projected/c1bcc6da-3adc-495c-b32c-c328ecd78165-kube-api-access-tcm8c\") pod \"aodh-db-create-vmrvq\" (UID: \"c1bcc6da-3adc-495c-b32c-c328ecd78165\") " pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.498220 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bcc6da-3adc-495c-b32c-c328ecd78165-operator-scripts\") pod \"aodh-db-create-vmrvq\" (UID: \"c1bcc6da-3adc-495c-b32c-c328ecd78165\") " pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.499296 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bcc6da-3adc-495c-b32c-c328ecd78165-operator-scripts\") pod \"aodh-db-create-vmrvq\" (UID: \"c1bcc6da-3adc-495c-b32c-c328ecd78165\") " pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.499445 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-9a4c-account-create-update-89d7j"] Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.501234 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.507519 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.538976 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcm8c\" (UniqueName: \"kubernetes.io/projected/c1bcc6da-3adc-495c-b32c-c328ecd78165-kube-api-access-tcm8c\") pod \"aodh-db-create-vmrvq\" (UID: \"c1bcc6da-3adc-495c-b32c-c328ecd78165\") " pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.599371 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-9a4c-account-create-update-89d7j"] Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.600766 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"73aca9b8-f04f-4490-a49d-128f8c9686c8","Type":"ContainerDied","Data":"fe9e4fb6acedd7e97546e2d44be61f7bda7aecb7642c608a2673de74605630cd"} Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.600818 5034 scope.go:117] "RemoveContainer" containerID="0b5f7010e5b351b961562333aa1d4b67a1ed89801f383a91d145aae35a129339" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.600771 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l56dd\" (UniqueName: \"kubernetes.io/projected/6b6254af-35d4-4259-869d-194ae72e9c8a-kube-api-access-l56dd\") pod \"aodh-9a4c-account-create-update-89d7j\" (UID: \"6b6254af-35d4-4259-869d-194ae72e9c8a\") " pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.600906 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.601164 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6254af-35d4-4259-869d-194ae72e9c8a-operator-scripts\") pod \"aodh-9a4c-account-create-update-89d7j\" (UID: \"6b6254af-35d4-4259-869d-194ae72e9c8a\") " pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.608993 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.644310 5034 scope.go:117] "RemoveContainer" containerID="4a72a8bbb105a1ca5f931fbc8643a5dbeec356e909b2138330882e070712f82f" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.715019 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l56dd\" (UniqueName: \"kubernetes.io/projected/6b6254af-35d4-4259-869d-194ae72e9c8a-kube-api-access-l56dd\") pod \"aodh-9a4c-account-create-update-89d7j\" (UID: \"6b6254af-35d4-4259-869d-194ae72e9c8a\") " pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.715229 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6254af-35d4-4259-869d-194ae72e9c8a-operator-scripts\") pod \"aodh-9a4c-account-create-update-89d7j\" (UID: \"6b6254af-35d4-4259-869d-194ae72e9c8a\") " pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.716535 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6254af-35d4-4259-869d-194ae72e9c8a-operator-scripts\") pod \"aodh-9a4c-account-create-update-89d7j\" (UID: \"6b6254af-35d4-4259-869d-194ae72e9c8a\") " pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.760914 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l56dd\" (UniqueName: \"kubernetes.io/projected/6b6254af-35d4-4259-869d-194ae72e9c8a-kube-api-access-l56dd\") pod \"aodh-9a4c-account-create-update-89d7j\" (UID: \"6b6254af-35d4-4259-869d-194ae72e9c8a\") " pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.801250 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.812303 5034 scope.go:117] "RemoveContainer" containerID="e1e6f1996a116d86fc2dc2f5c385684aeaec0ef3e01427bdb723d70eccd8b282" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.820021 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.831490 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.851180 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.854022 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.870102 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.870557 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.870974 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.871103 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.871208 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.873659 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.873884 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.874011 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pwjz8" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.885181 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.896322 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 23:37:50 crc kubenswrapper[5034]: I0105 23:37:50.954491 5034 scope.go:117] "RemoveContainer" containerID="5548450e6c0c193e74140977d4565679da0011906488c6b5227fe473596021de" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.038588 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039061 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjswh\" (UniqueName: \"kubernetes.io/projected/72d2c93e-1be9-4e89-878e-3f802869275c-kube-api-access-tjswh\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039159 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039200 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-config\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039241 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/72d2c93e-1be9-4e89-878e-3f802869275c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039677 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039778 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039861 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039911 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.039995 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/72d2c93e-1be9-4e89-878e-3f802869275c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.040041 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.040388 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.040578 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.143904 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.143983 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144012 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/72d2c93e-1be9-4e89-878e-3f802869275c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144041 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144109 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144158 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144234 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144297 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjswh\" (UniqueName: \"kubernetes.io/projected/72d2c93e-1be9-4e89-878e-3f802869275c-kube-api-access-tjswh\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144347 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-config\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144372 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144395 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/72d2c93e-1be9-4e89-878e-3f802869275c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144479 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.144518 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.154808 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.154892 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.155828 5034 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/72d2c93e-1be9-4e89-878e-3f802869275c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.158883 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.160003 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/72d2c93e-1be9-4e89-878e-3f802869275c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.161822 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-config\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.166236 5034 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.166288 5034 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2f416067a4e274c58975fa574303358506362fcca46365bf62044b89d6a3e9d4/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.167748 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/72d2c93e-1be9-4e89-878e-3f802869275c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.170048 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.170253 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.172731 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/72d2c93e-1be9-4e89-878e-3f802869275c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.175674 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjswh\" (UniqueName: \"kubernetes.io/projected/72d2c93e-1be9-4e89-878e-3f802869275c-kube-api-access-tjswh\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.217519 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f42fe4-5deb-4967-a97b-d6e49406e8f2\") pod \"prometheus-metric-storage-0\" (UID: \"72d2c93e-1be9-4e89-878e-3f802869275c\") " pod="openstack/prometheus-metric-storage-0"
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.263367 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.448003 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vmrvq"]
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.598418 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-9a4c-account-create-update-89d7j"]
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.632507 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vmrvq" event={"ID":"c1bcc6da-3adc-495c-b32c-c328ecd78165","Type":"ContainerStarted","Data":"b8cdd6416ecfe2cde606b4ed14c3eb9edb3e5412435ab5345e551c341696d401"}
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.850764 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73aca9b8-f04f-4490-a49d-128f8c9686c8" path="/var/lib/kubelet/pods/73aca9b8-f04f-4490-a49d-128f8c9686c8/volumes"
Jan 05 23:37:51 crc kubenswrapper[5034]: W0105 23:37:51.869061 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d2c93e_1be9_4e89_878e_3f802869275c.slice/crio-08d9129ac2a33d0d361fc3cbbab2b713be8391ee160a6200b41dd9957cfd6783 WatchSource:0}: Error finding container 08d9129ac2a33d0d361fc3cbbab2b713be8391ee160a6200b41dd9957cfd6783: Status 404 returned error can't find the container with id 08d9129ac2a33d0d361fc3cbbab2b713be8391ee160a6200b41dd9957cfd6783
Jan 05 23:37:51 crc kubenswrapper[5034]: I0105 23:37:51.871404 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 23:37:52 crc kubenswrapper[5034]: I0105 23:37:52.660411 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"72d2c93e-1be9-4e89-878e-3f802869275c","Type":"ContainerStarted","Data":"08d9129ac2a33d0d361fc3cbbab2b713be8391ee160a6200b41dd9957cfd6783"}
Jan 05 23:37:52 crc kubenswrapper[5034]: I0105 23:37:52.662217 5034 generic.go:334] "Generic (PLEG): container finished" podID="c1bcc6da-3adc-495c-b32c-c328ecd78165" containerID="3ccadc54da0f3e5b4e3dec95722672b4e917a815cd3412218b05c1e964987b52" exitCode=0
Jan 05 23:37:52 crc kubenswrapper[5034]: I0105 23:37:52.662794 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vmrvq" event={"ID":"c1bcc6da-3adc-495c-b32c-c328ecd78165","Type":"ContainerDied","Data":"3ccadc54da0f3e5b4e3dec95722672b4e917a815cd3412218b05c1e964987b52"}
Jan 05 23:37:52 crc kubenswrapper[5034]: I0105 23:37:52.665688 5034 generic.go:334] "Generic (PLEG): container finished" podID="6b6254af-35d4-4259-869d-194ae72e9c8a" containerID="02a8ba172b626ca036caa14c60a73cdfcc37fca184afaf3f28e2d30be8a53d3b" exitCode=0
Jan 05 23:37:52 crc kubenswrapper[5034]: I0105 23:37:52.665742 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-9a4c-account-create-update-89d7j" event={"ID":"6b6254af-35d4-4259-869d-194ae72e9c8a","Type":"ContainerDied","Data":"02a8ba172b626ca036caa14c60a73cdfcc37fca184afaf3f28e2d30be8a53d3b"}
Jan 05 23:37:52 crc kubenswrapper[5034]: I0105 23:37:52.665774 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-9a4c-account-create-update-89d7j" event={"ID":"6b6254af-35d4-4259-869d-194ae72e9c8a","Type":"ContainerStarted","Data":"555a4a4b6b12e9b7aa159f04e4468c50004c17948a155b81c81696f244afbfd8"}
Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.687748 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vmrvq" event={"ID":"c1bcc6da-3adc-495c-b32c-c328ecd78165","Type":"ContainerDied","Data":"b8cdd6416ecfe2cde606b4ed14c3eb9edb3e5412435ab5345e551c341696d401"}
Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.688197 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8cdd6416ecfe2cde606b4ed14c3eb9edb3e5412435ab5345e551c341696d401"
Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.689580 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-9a4c-account-create-update-89d7j" event={"ID":"6b6254af-35d4-4259-869d-194ae72e9c8a","Type":"ContainerDied","Data":"555a4a4b6b12e9b7aa159f04e4468c50004c17948a155b81c81696f244afbfd8"}
Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.689617 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="555a4a4b6b12e9b7aa159f04e4468c50004c17948a155b81c81696f244afbfd8"
Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.719522 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-9a4c-account-create-update-89d7j"
Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.729543 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vmrvq"
Need to start a new one" pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.841071 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcm8c\" (UniqueName: \"kubernetes.io/projected/c1bcc6da-3adc-495c-b32c-c328ecd78165-kube-api-access-tcm8c\") pod \"c1bcc6da-3adc-495c-b32c-c328ecd78165\" (UID: \"c1bcc6da-3adc-495c-b32c-c328ecd78165\") " Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.841521 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bcc6da-3adc-495c-b32c-c328ecd78165-operator-scripts\") pod \"c1bcc6da-3adc-495c-b32c-c328ecd78165\" (UID: \"c1bcc6da-3adc-495c-b32c-c328ecd78165\") " Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.841712 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l56dd\" (UniqueName: \"kubernetes.io/projected/6b6254af-35d4-4259-869d-194ae72e9c8a-kube-api-access-l56dd\") pod \"6b6254af-35d4-4259-869d-194ae72e9c8a\" (UID: \"6b6254af-35d4-4259-869d-194ae72e9c8a\") " Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.841767 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6254af-35d4-4259-869d-194ae72e9c8a-operator-scripts\") pod \"6b6254af-35d4-4259-869d-194ae72e9c8a\" (UID: \"6b6254af-35d4-4259-869d-194ae72e9c8a\") " Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.842193 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1bcc6da-3adc-495c-b32c-c328ecd78165-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1bcc6da-3adc-495c-b32c-c328ecd78165" (UID: "c1bcc6da-3adc-495c-b32c-c328ecd78165"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.842598 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6254af-35d4-4259-869d-194ae72e9c8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b6254af-35d4-4259-869d-194ae72e9c8a" (UID: "6b6254af-35d4-4259-869d-194ae72e9c8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.844315 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6254af-35d4-4259-869d-194ae72e9c8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.844368 5034 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bcc6da-3adc-495c-b32c-c328ecd78165-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.849787 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1bcc6da-3adc-495c-b32c-c328ecd78165-kube-api-access-tcm8c" (OuterVolumeSpecName: "kube-api-access-tcm8c") pod "c1bcc6da-3adc-495c-b32c-c328ecd78165" (UID: "c1bcc6da-3adc-495c-b32c-c328ecd78165"). InnerVolumeSpecName "kube-api-access-tcm8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.850431 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6254af-35d4-4259-869d-194ae72e9c8a-kube-api-access-l56dd" (OuterVolumeSpecName: "kube-api-access-l56dd") pod "6b6254af-35d4-4259-869d-194ae72e9c8a" (UID: "6b6254af-35d4-4259-869d-194ae72e9c8a"). InnerVolumeSpecName "kube-api-access-l56dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.946605 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcm8c\" (UniqueName: \"kubernetes.io/projected/c1bcc6da-3adc-495c-b32c-c328ecd78165-kube-api-access-tcm8c\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:54 crc kubenswrapper[5034]: I0105 23:37:54.946644 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l56dd\" (UniqueName: \"kubernetes.io/projected/6b6254af-35d4-4259-869d-194ae72e9c8a-kube-api-access-l56dd\") on node \"crc\" DevicePath \"\"" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.700406 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vmrvq" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.700426 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-9a4c-account-create-update-89d7j" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.861376 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-klqvq"] Jan 05 23:37:55 crc kubenswrapper[5034]: E0105 23:37:55.861799 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1bcc6da-3adc-495c-b32c-c328ecd78165" containerName="mariadb-database-create" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.861817 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1bcc6da-3adc-495c-b32c-c328ecd78165" containerName="mariadb-database-create" Jan 05 23:37:55 crc kubenswrapper[5034]: E0105 23:37:55.861832 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6254af-35d4-4259-869d-194ae72e9c8a" containerName="mariadb-account-create-update" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.861838 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6254af-35d4-4259-869d-194ae72e9c8a" containerName="mariadb-account-create-update" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.862151 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6254af-35d4-4259-869d-194ae72e9c8a" containerName="mariadb-account-create-update" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.862178 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1bcc6da-3adc-495c-b32c-c328ecd78165" containerName="mariadb-database-create" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.867655 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.870482 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klqvq"] Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.972023 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-utilities\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.972502 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswtz\" (UniqueName: \"kubernetes.io/projected/09d918a1-ce14-4606-829e-af0575fa3505-kube-api-access-mswtz\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:55 crc kubenswrapper[5034]: I0105 23:37:55.972557 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-catalog-content\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.075248 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-catalog-content\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.075376 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-utilities\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.075644 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mswtz\" (UniqueName: \"kubernetes.io/projected/09d918a1-ce14-4606-829e-af0575fa3505-kube-api-access-mswtz\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.075973 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-utilities\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.076305 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-catalog-content\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.098984 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mswtz\" (UniqueName: \"kubernetes.io/projected/09d918a1-ce14-4606-829e-af0575fa3505-kube-api-access-mswtz\") pod \"redhat-operators-klqvq\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.194351 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:37:56 crc kubenswrapper[5034]: W0105 23:37:56.656826 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09d918a1_ce14_4606_829e_af0575fa3505.slice/crio-4b6d12c32a9d14ca5172dc5ef170bdeca6e984f33515309d736ca7816754fe5b WatchSource:0}: Error finding container 4b6d12c32a9d14ca5172dc5ef170bdeca6e984f33515309d736ca7816754fe5b: Status 404 returned error can't find the container with id 4b6d12c32a9d14ca5172dc5ef170bdeca6e984f33515309d736ca7816754fe5b Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.669405 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klqvq"] Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.711985 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klqvq" event={"ID":"09d918a1-ce14-4606-829e-af0575fa3505","Type":"ContainerStarted","Data":"4b6d12c32a9d14ca5172dc5ef170bdeca6e984f33515309d736ca7816754fe5b"} Jan 05 23:37:56 crc kubenswrapper[5034]: I0105 23:37:56.714264 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"72d2c93e-1be9-4e89-878e-3f802869275c","Type":"ContainerStarted","Data":"aa8e037f13f4d0cbc1c4c3142d05103fb06ddaece5db0776e2bba0f5eda8f2d5"} Jan 05 23:37:57 crc kubenswrapper[5034]: I0105 23:37:57.725571 5034 generic.go:334] "Generic (PLEG): container finished" podID="09d918a1-ce14-4606-829e-af0575fa3505" containerID="2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea" exitCode=0 Jan 05 23:37:57 crc kubenswrapper[5034]: I0105 23:37:57.725739 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klqvq" event={"ID":"09d918a1-ce14-4606-829e-af0575fa3505","Type":"ContainerDied","Data":"2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea"} Jan 05 23:37:57 crc kubenswrapper[5034]: I0105 23:37:57.851606 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:37:57 crc kubenswrapper[5034]: E0105 23:37:57.851973 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:37:59 crc kubenswrapper[5034]: I0105 23:37:59.752833 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klqvq" event={"ID":"09d918a1-ce14-4606-829e-af0575fa3505","Type":"ContainerStarted","Data":"ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005"} Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.851093 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-8d82c"] Jan 05 23:38:00 crc kubenswrapper[5034]: 
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.858911 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.859601 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.859873 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.860250 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-stdcb"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.874447 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8d82c"]
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.891678 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp7ph\" (UniqueName: \"kubernetes.io/projected/90b334f9-01bc-4ec5-a98e-a65e946939c9-kube-api-access-cp7ph\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.891737 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-config-data\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.891772 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-combined-ca-bundle\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.891906 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-scripts\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.994988 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp7ph\" (UniqueName: \"kubernetes.io/projected/90b334f9-01bc-4ec5-a98e-a65e946939c9-kube-api-access-cp7ph\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.995102 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-config-data\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.995137 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-combined-ca-bundle\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:00 crc kubenswrapper[5034]: I0105 23:38:00.995275 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-scripts\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:01 crc kubenswrapper[5034]: I0105 23:38:01.001993 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-combined-ca-bundle\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:01 crc kubenswrapper[5034]: I0105 23:38:01.003148 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-scripts\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:01 crc kubenswrapper[5034]: I0105 23:38:01.015387 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-config-data\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:01 crc kubenswrapper[5034]: I0105 23:38:01.015409 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp7ph\" (UniqueName: \"kubernetes.io/projected/90b334f9-01bc-4ec5-a98e-a65e946939c9-kube-api-access-cp7ph\") pod \"aodh-db-sync-8d82c\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " pod="openstack/aodh-db-sync-8d82c"
Jan 05 23:38:01 crc kubenswrapper[5034]: I0105 23:38:01.185889 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8d82c"
Need to start a new one" pod="openstack/aodh-db-sync-8d82c" Jan 05 23:38:01 crc kubenswrapper[5034]: I0105 23:38:01.721006 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8d82c"] Jan 05 23:38:01 crc kubenswrapper[5034]: W0105 23:38:01.737245 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90b334f9_01bc_4ec5_a98e_a65e946939c9.slice/crio-40c447cf380ffaf5710c08848edb1f0006f8c8170597e99520c09e8a327523fb WatchSource:0}: Error finding container 40c447cf380ffaf5710c08848edb1f0006f8c8170597e99520c09e8a327523fb: Status 404 returned error can't find the container with id 40c447cf380ffaf5710c08848edb1f0006f8c8170597e99520c09e8a327523fb Jan 05 23:38:01 crc kubenswrapper[5034]: I0105 23:38:01.774370 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8d82c" event={"ID":"90b334f9-01bc-4ec5-a98e-a65e946939c9","Type":"ContainerStarted","Data":"40c447cf380ffaf5710c08848edb1f0006f8c8170597e99520c09e8a327523fb"} Jan 05 23:38:02 crc kubenswrapper[5034]: I0105 23:38:02.787191 5034 generic.go:334] "Generic (PLEG): container finished" podID="09d918a1-ce14-4606-829e-af0575fa3505" containerID="ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005" exitCode=0 Jan 05 23:38:02 crc kubenswrapper[5034]: I0105 23:38:02.787272 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klqvq" event={"ID":"09d918a1-ce14-4606-829e-af0575fa3505","Type":"ContainerDied","Data":"ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005"} Jan 05 23:38:02 crc kubenswrapper[5034]: I0105 23:38:02.790782 5034 generic.go:334] "Generic (PLEG): container finished" podID="72d2c93e-1be9-4e89-878e-3f802869275c" containerID="aa8e037f13f4d0cbc1c4c3142d05103fb06ddaece5db0776e2bba0f5eda8f2d5" exitCode=0 Jan 05 23:38:02 crc kubenswrapper[5034]: I0105 23:38:02.790818 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"72d2c93e-1be9-4e89-878e-3f802869275c","Type":"ContainerDied","Data":"aa8e037f13f4d0cbc1c4c3142d05103fb06ddaece5db0776e2bba0f5eda8f2d5"} Jan 05 23:38:03 crc kubenswrapper[5034]: I0105 23:38:03.825266 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"72d2c93e-1be9-4e89-878e-3f802869275c","Type":"ContainerStarted","Data":"85b11e9d7e4f03ad7b2dce6e17fbcd50dfc52ea5210a894fe39b367d26a59f8d"} Jan 05 23:38:06 crc kubenswrapper[5034]: I0105 23:38:06.863937 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klqvq" event={"ID":"09d918a1-ce14-4606-829e-af0575fa3505","Type":"ContainerStarted","Data":"24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297"} Jan 05 23:38:06 crc kubenswrapper[5034]: I0105 23:38:06.868351 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8d82c" event={"ID":"90b334f9-01bc-4ec5-a98e-a65e946939c9","Type":"ContainerStarted","Data":"31049c118ac1833c95aa687e3b52d15d9eec277676d5bdbb5b2b36c120e4c4be"} Jan 05 23:38:06 crc kubenswrapper[5034]: I0105 23:38:06.873831 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"72d2c93e-1be9-4e89-878e-3f802869275c","Type":"ContainerStarted","Data":"e273dd20faec01b879bc5846e90fc21ea8da94318d8c1b869dbf0d14ff4b7318"} Jan 05 23:38:06 crc kubenswrapper[5034]: I0105 23:38:06.893851 5034 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-klqvq" podStartSLOduration=4.211071777 podStartE2EDuration="11.893827218s" podCreationTimestamp="2026-01-05 23:37:55 +0000 UTC" firstStartedPulling="2026-01-05 23:37:57.728102326 +0000 UTC m=+6370.100101765" lastFinishedPulling="2026-01-05 23:38:05.410857767 +0000 UTC m=+6377.782857206" observedRunningTime="2026-01-05 23:38:06.885573792 +0000 UTC m=+6379.257573231" watchObservedRunningTime="2026-01-05 23:38:06.893827218 +0000 UTC m=+6379.265826657" Jan 05 23:38:06 crc kubenswrapper[5034]: I0105 23:38:06.917540 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-8d82c" podStartSLOduration=2.739762602 podStartE2EDuration="6.917510113s" podCreationTimestamp="2026-01-05 23:38:00 +0000 UTC" firstStartedPulling="2026-01-05 23:38:01.74287725 +0000 UTC m=+6374.114876699" lastFinishedPulling="2026-01-05 23:38:05.920624771 +0000 UTC m=+6378.292624210" observedRunningTime="2026-01-05 23:38:06.906461788 +0000 UTC m=+6379.278461227" watchObservedRunningTime="2026-01-05 23:38:06.917510113 +0000 UTC m=+6379.289509552" Jan 05 23:38:07 crc kubenswrapper[5034]: I0105 23:38:07.900088 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"72d2c93e-1be9-4e89-878e-3f802869275c","Type":"ContainerStarted","Data":"b274287fbbe1c0a326dabd816f4c251d575fbeda66b23b1b6cc0cec5f70cb999"} Jan 05 23:38:07 crc kubenswrapper[5034]: I0105 23:38:07.937109 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.93706994 podStartE2EDuration="17.93706994s" podCreationTimestamp="2026-01-05 23:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:38:07.932995484 +0000 UTC m=+6380.304994923" watchObservedRunningTime="2026-01-05 23:38:07.93706994 +0000 UTC m=+6380.309069369" Jan 05 23:38:09 crc kubenswrapper[5034]: I0105 23:38:09.920122 5034 generic.go:334] "Generic (PLEG): container finished" podID="90b334f9-01bc-4ec5-a98e-a65e946939c9" containerID="31049c118ac1833c95aa687e3b52d15d9eec277676d5bdbb5b2b36c120e4c4be" exitCode=0 Jan 05 23:38:09 crc kubenswrapper[5034]: I0105 23:38:09.920248 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8d82c" event={"ID":"90b334f9-01bc-4ec5-a98e-a65e946939c9","Type":"ContainerDied","Data":"31049c118ac1833c95aa687e3b52d15d9eec277676d5bdbb5b2b36c120e4c4be"} Jan 05 23:38:10 crc kubenswrapper[5034]: I0105 23:38:10.552655 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.264708 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.296899 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-8d82c" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.405336 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp7ph\" (UniqueName: \"kubernetes.io/projected/90b334f9-01bc-4ec5-a98e-a65e946939c9-kube-api-access-cp7ph\") pod \"90b334f9-01bc-4ec5-a98e-a65e946939c9\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.405580 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-scripts\") pod \"90b334f9-01bc-4ec5-a98e-a65e946939c9\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.405615 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-config-data\") pod \"90b334f9-01bc-4ec5-a98e-a65e946939c9\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.405691 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-combined-ca-bundle\") pod \"90b334f9-01bc-4ec5-a98e-a65e946939c9\" (UID: \"90b334f9-01bc-4ec5-a98e-a65e946939c9\") " Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.430879 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-scripts" (OuterVolumeSpecName: "scripts") pod "90b334f9-01bc-4ec5-a98e-a65e946939c9" (UID: "90b334f9-01bc-4ec5-a98e-a65e946939c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.430919 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b334f9-01bc-4ec5-a98e-a65e946939c9-kube-api-access-cp7ph" (OuterVolumeSpecName: "kube-api-access-cp7ph") pod "90b334f9-01bc-4ec5-a98e-a65e946939c9" (UID: "90b334f9-01bc-4ec5-a98e-a65e946939c9"). InnerVolumeSpecName "kube-api-access-cp7ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.461730 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-config-data" (OuterVolumeSpecName: "config-data") pod "90b334f9-01bc-4ec5-a98e-a65e946939c9" (UID: "90b334f9-01bc-4ec5-a98e-a65e946939c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.470758 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90b334f9-01bc-4ec5-a98e-a65e946939c9" (UID: "90b334f9-01bc-4ec5-a98e-a65e946939c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.512588 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.513104 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.513122 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp7ph\" (UniqueName: \"kubernetes.io/projected/90b334f9-01bc-4ec5-a98e-a65e946939c9-kube-api-access-cp7ph\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.513137 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b334f9-01bc-4ec5-a98e-a65e946939c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.839185 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:38:11 crc kubenswrapper[5034]: E0105 23:38:11.839624 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.869890 5034 scope.go:117] "RemoveContainer" containerID="4a40f2be015b8dc9807fc47a4472d1954ea222c1ed9f3faa42158be4b4730895" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.893937 5034 scope.go:117] "RemoveContainer" containerID="383ced50a2766e5ffb170c33bc8c1079e41f66926989292eb0ae7452fcbdf955" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.958750 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8d82c" event={"ID":"90b334f9-01bc-4ec5-a98e-a65e946939c9","Type":"ContainerDied","Data":"40c447cf380ffaf5710c08848edb1f0006f8c8170597e99520c09e8a327523fb"} Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.958802 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40c447cf380ffaf5710c08848edb1f0006f8c8170597e99520c09e8a327523fb" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.958858 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-8d82c" Jan 05 23:38:11 crc kubenswrapper[5034]: I0105 23:38:11.967812 5034 scope.go:117] "RemoveContainer" containerID="cf8ee75eaa56134b0c8b6a1467c78a06c5b25ed64f87b6ac90d87ab6d8870004" Jan 05 23:38:14 crc kubenswrapper[5034]: I0105 23:38:14.056024 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-l6mz5"] Jan 05 23:38:14 crc kubenswrapper[5034]: I0105 23:38:14.083611 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-869d-account-create-update-2dgbp"] Jan 05 23:38:14 crc kubenswrapper[5034]: I0105 23:38:14.129572 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dhgbn"] Jan 05 23:38:14 crc kubenswrapper[5034]: I0105 23:38:14.159345 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-l6mz5"] Jan 05 23:38:14 crc kubenswrapper[5034]: I0105 23:38:14.168965 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-869d-account-create-update-2dgbp"] Jan 05 23:38:14 crc kubenswrapper[5034]: I0105 23:38:14.179959 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dhgbn"] Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.043661 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7c86-account-create-update-plzz9"] Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.060129 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8rz48"] Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.075739 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-15ab-account-create-update-rmxq6"] Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.092334 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-15ab-account-create-update-rmxq6"] Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.107175 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7c86-account-create-update-plzz9"] Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.119298 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8rz48"] Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.527977 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:15 crc kubenswrapper[5034]: E0105 23:38:15.528606 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b334f9-01bc-4ec5-a98e-a65e946939c9" containerName="aodh-db-sync" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.528625 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b334f9-01bc-4ec5-a98e-a65e946939c9" containerName="aodh-db-sync" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.528856 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b334f9-01bc-4ec5-a98e-a65e946939c9" containerName="aodh-db-sync" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.531236 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.533612 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.535052 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-stdcb" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.535306 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.546732 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.651201 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-config-data\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.651768 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-combined-ca-bundle\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.651810 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-scripts\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.651874 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2p9\" (UniqueName: \"kubernetes.io/projected/92dfe6ee-f148-404b-86f1-e6c5c5893732-kube-api-access-jr2p9\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.754029 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-config-data\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.754175 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-combined-ca-bundle\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.754216 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-scripts\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.754261 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2p9\" (UniqueName: \"kubernetes.io/projected/92dfe6ee-f148-404b-86f1-e6c5c5893732-kube-api-access-jr2p9\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: 
I0105 23:38:15.765450 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-scripts\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.765978 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-combined-ca-bundle\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.766127 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-config-data\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.774550 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2p9\" (UniqueName: \"kubernetes.io/projected/92dfe6ee-f148-404b-86f1-e6c5c5893732-kube-api-access-jr2p9\") pod \"aodh-0\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.851145 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2483105c-df62-435f-a09c-cc9750dd2850" path="/var/lib/kubelet/pods/2483105c-df62-435f-a09c-cc9750dd2850/volumes" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.852165 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5" path="/var/lib/kubelet/pods/3f7d5e14-ffb3-4ab6-9bbb-3f5b9c40e7b5/volumes" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.852774 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.852963 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47fc0f5e-1048-4339-b2f7-7a67916d7f0e" path="/var/lib/kubelet/pods/47fc0f5e-1048-4339-b2f7-7a67916d7f0e/volumes" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.854578 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89dfae35-d17d-42c1-b717-0bb03abf7fc7" path="/var/lib/kubelet/pods/89dfae35-d17d-42c1-b717-0bb03abf7fc7/volumes" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.855819 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df9b691-1416-4d79-adfb-c00f80cadac4" path="/var/lib/kubelet/pods/8df9b691-1416-4d79-adfb-c00f80cadac4/volumes" Jan 05 23:38:15 crc kubenswrapper[5034]: I0105 23:38:15.856546 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdb7a79-8ad0-4b33-ada0-48c0613e3541" path="/var/lib/kubelet/pods/efdb7a79-8ad0-4b33-ada0-48c0613e3541/volumes" Jan 05 23:38:16 crc kubenswrapper[5034]: I0105 23:38:16.196583 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:38:16 crc kubenswrapper[5034]: I0105 23:38:16.196997 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:38:16 crc kubenswrapper[5034]: I0105 23:38:16.377801 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:16 crc kubenswrapper[5034]: W0105 23:38:16.378254 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92dfe6ee_f148_404b_86f1_e6c5c5893732.slice/crio-4b1d90d1f5e415d1ec6970fa255deec2767bf6c74f4755113d6db3838defde39 WatchSource:0}: Error finding container 4b1d90d1f5e415d1ec6970fa255deec2767bf6c74f4755113d6db3838defde39: Status 404 returned error can't find the container with id 4b1d90d1f5e415d1ec6970fa255deec2767bf6c74f4755113d6db3838defde39 Jan 05 23:38:17 crc kubenswrapper[5034]: I0105 23:38:17.018374 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerStarted","Data":"4b1d90d1f5e415d1ec6970fa255deec2767bf6c74f4755113d6db3838defde39"} Jan 05 23:38:17 crc kubenswrapper[5034]: I0105 23:38:17.261153 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-klqvq" podUID="09d918a1-ce14-4606-829e-af0575fa3505" containerName="registry-server" probeResult="failure" output=< Jan 05 23:38:17 crc kubenswrapper[5034]: timeout: failed to connect service ":50051" within 1s Jan 05 23:38:17 crc kubenswrapper[5034]: > Jan 05 23:38:18 crc kubenswrapper[5034]: I0105 23:38:18.031159 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerStarted","Data":"f55953e92b6565d431c7320114197a89a82b81939defe67c75d4f29903b50d3e"} Jan 05 23:38:18 crc kubenswrapper[5034]: I0105 23:38:18.078481 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:18 crc kubenswrapper[5034]: I0105 23:38:18.078818 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="ceilometer-central-agent" 
containerID="cri-o://7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0" gracePeriod=30 Jan 05 23:38:18 crc kubenswrapper[5034]: I0105 23:38:18.079408 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="sg-core" containerID="cri-o://d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5" gracePeriod=30 Jan 05 23:38:18 crc kubenswrapper[5034]: I0105 23:38:18.079474 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="proxy-httpd" containerID="cri-o://34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1" gracePeriod=30 Jan 05 23:38:18 crc kubenswrapper[5034]: I0105 23:38:18.079480 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="ceilometer-notification-agent" containerID="cri-o://079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5" gracePeriod=30 Jan 05 23:38:18 crc kubenswrapper[5034]: I0105 23:38:18.893470 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:19 crc kubenswrapper[5034]: I0105 23:38:19.044529 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerStarted","Data":"74b3f867d3d738cdc14271a06f29e017c7d9c0b3ce2e69acb5e6ee2434529458"} Jan 05 23:38:19 crc kubenswrapper[5034]: I0105 23:38:19.047488 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d0f7073-d292-4943-8245-22b8485cfee9" containerID="34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1" exitCode=0 Jan 05 23:38:19 crc kubenswrapper[5034]: I0105 23:38:19.047517 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d0f7073-d292-4943-8245-22b8485cfee9" containerID="d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5" exitCode=2 Jan 05 23:38:19 crc kubenswrapper[5034]: I0105 23:38:19.047528 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d0f7073-d292-4943-8245-22b8485cfee9" containerID="7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0" exitCode=0 Jan 05 23:38:19 crc kubenswrapper[5034]: I0105 23:38:19.047548 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerDied","Data":"34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1"} Jan 05 23:38:19 crc kubenswrapper[5034]: I0105 23:38:19.047571 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerDied","Data":"d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5"} Jan 05 23:38:19 crc kubenswrapper[5034]: I0105 23:38:19.047583 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerDied","Data":"7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0"} Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.129066 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.160193 5034 generic.go:334] "Generic (PLEG): container finished" podID="1d0f7073-d292-4943-8245-22b8485cfee9" containerID="079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5" exitCode=0 Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.160284 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerDied","Data":"079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5"} Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.160328 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d0f7073-d292-4943-8245-22b8485cfee9","Type":"ContainerDied","Data":"b870326998f5178b34f171c8f37233ede4069661e0c06fe93e461ad9b752b4a6"} Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.160353 5034 scope.go:117] "RemoveContainer" containerID="34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.165667 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerStarted","Data":"e3a2eea89a6d50a81b766d77fe26e1952d7a0387d886e32366855808048924ec"} Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.198552 5034 scope.go:117] "RemoveContainer" containerID="d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.250556 5034 scope.go:117] "RemoveContainer" containerID="079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.255453 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-sg-core-conf-yaml\") pod \"1d0f7073-d292-4943-8245-22b8485cfee9\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.255519 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-combined-ca-bundle\") pod \"1d0f7073-d292-4943-8245-22b8485cfee9\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.255610 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-scripts\") pod \"1d0f7073-d292-4943-8245-22b8485cfee9\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.255803 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-run-httpd\") pod \"1d0f7073-d292-4943-8245-22b8485cfee9\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.255854 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5bwr\" (UniqueName: \"kubernetes.io/projected/1d0f7073-d292-4943-8245-22b8485cfee9-kube-api-access-j5bwr\") pod \"1d0f7073-d292-4943-8245-22b8485cfee9\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 
23:38:21.255881 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-log-httpd\") pod \"1d0f7073-d292-4943-8245-22b8485cfee9\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.255908 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-config-data\") pod \"1d0f7073-d292-4943-8245-22b8485cfee9\" (UID: \"1d0f7073-d292-4943-8245-22b8485cfee9\") " Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.256823 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1d0f7073-d292-4943-8245-22b8485cfee9" (UID: "1d0f7073-d292-4943-8245-22b8485cfee9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.257880 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1d0f7073-d292-4943-8245-22b8485cfee9" (UID: "1d0f7073-d292-4943-8245-22b8485cfee9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.263789 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d0f7073-d292-4943-8245-22b8485cfee9-kube-api-access-j5bwr" (OuterVolumeSpecName: "kube-api-access-j5bwr") pod "1d0f7073-d292-4943-8245-22b8485cfee9" (UID: "1d0f7073-d292-4943-8245-22b8485cfee9"). InnerVolumeSpecName "kube-api-access-j5bwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.263798 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-scripts" (OuterVolumeSpecName: "scripts") pod "1d0f7073-d292-4943-8245-22b8485cfee9" (UID: "1d0f7073-d292-4943-8245-22b8485cfee9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.264658 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.273190 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.285990 5034 scope.go:117] "RemoveContainer" containerID="7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.307714 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1d0f7073-d292-4943-8245-22b8485cfee9" (UID: "1d0f7073-d292-4943-8245-22b8485cfee9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.337850 5034 scope.go:117] "RemoveContainer" containerID="34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1" Jan 05 23:38:21 crc kubenswrapper[5034]: E0105 23:38:21.338367 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1\": container with ID starting with 34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1 not found: ID does not exist" containerID="34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.338426 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1"} err="failed to get container status \"34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1\": rpc error: code = NotFound desc = could not find container \"34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1\": container with ID starting with 34faee029d0477a7ba3ab584f24e78cc1a12431f0ca4207b62baba683e1a99a1 not found: ID does not exist" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.338462 5034 scope.go:117] "RemoveContainer" containerID="d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5" Jan 05 23:38:21 crc kubenswrapper[5034]: E0105 23:38:21.338727 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5\": container with ID starting with d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5 not found: ID does not exist" containerID="d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.338752 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5"} err="failed to get container status \"d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5\": rpc error: code = NotFound desc = could not find container \"d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5\": container with ID starting with d0108247db9bb543bc29a222ec00fb20601578eba258e41d45a80b221ce17ef5 not found: ID does not exist" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.338767 5034 scope.go:117] "RemoveContainer" containerID="079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5" Jan 05 23:38:21 crc kubenswrapper[5034]: E0105 23:38:21.338969 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5\": container with ID starting with 079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5 not found: ID does not exist" containerID="079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.338995 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5"} err="failed to get container status \"079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5\": rpc error: code = NotFound desc = could not 
find container \"079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5\": container with ID starting with 079ea3e81565a632e22889ce94513601eae224b2c6ab1866dcd143ff834376a5 not found: ID does not exist" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.339011 5034 scope.go:117] "RemoveContainer" containerID="7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0" Jan 05 23:38:21 crc kubenswrapper[5034]: E0105 23:38:21.339287 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0\": container with ID starting with 7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0 not found: ID does not exist" containerID="7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.339310 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0"} err="failed to get container status \"7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0\": rpc error: code = NotFound desc = could not find container \"7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0\": container with ID starting with 7bb0f46b19e1877545fa20e661097c93ed3c1bddf745f6aacf809ac2dceb1ad0 not found: ID does not exist" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.360899 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.360943 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.360953 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.360961 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5bwr\" (UniqueName: \"kubernetes.io/projected/1d0f7073-d292-4943-8245-22b8485cfee9-kube-api-access-j5bwr\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.360975 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d0f7073-d292-4943-8245-22b8485cfee9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.376096 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d0f7073-d292-4943-8245-22b8485cfee9" (UID: "1d0f7073-d292-4943-8245-22b8485cfee9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.420714 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-config-data" (OuterVolumeSpecName: "config-data") pod "1d0f7073-d292-4943-8245-22b8485cfee9" (UID: "1d0f7073-d292-4943-8245-22b8485cfee9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.463145 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:21 crc kubenswrapper[5034]: I0105 23:38:21.463194 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0f7073-d292-4943-8245-22b8485cfee9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.178333 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.183736 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.211851 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.241140 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.271026 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:22 crc kubenswrapper[5034]: E0105 23:38:22.271872 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="ceilometer-notification-agent" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.271981 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="ceilometer-notification-agent" Jan 05 23:38:22 crc kubenswrapper[5034]: E0105 23:38:22.274695 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="sg-core" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.274813 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="sg-core" Jan 05 23:38:22 crc kubenswrapper[5034]: E0105 23:38:22.274894 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="ceilometer-central-agent" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.274955 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="ceilometer-central-agent" Jan 05 23:38:22 crc kubenswrapper[5034]: E0105 23:38:22.275023 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="proxy-httpd" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.275095 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="proxy-httpd" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.275468 5034 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="ceilometer-notification-agent" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.275535 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="ceilometer-central-agent" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.275623 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="sg-core" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.275702 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" containerName="proxy-httpd" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.278045 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.286922 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.287238 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.307924 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.390315 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsl46\" (UniqueName: \"kubernetes.io/projected/dfa813b2-8376-4288-ae01-fffe59bcb75a-kube-api-access-hsl46\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.390404 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-log-httpd\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.390571 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-scripts\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.390600 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-run-httpd\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.390658 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.390798 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-config-data\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " 
pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.390853 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.492857 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.492921 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-config-data\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.492946 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.493041 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsl46\" (UniqueName: \"kubernetes.io/projected/dfa813b2-8376-4288-ae01-fffe59bcb75a-kube-api-access-hsl46\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.493069 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-log-httpd\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.493161 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-scripts\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.493186 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-run-httpd\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.493691 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-run-httpd\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.495676 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-log-httpd\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " 
pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.498814 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-config-data\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.506809 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-scripts\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.506818 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.506821 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.515999 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsl46\" (UniqueName: \"kubernetes.io/projected/dfa813b2-8376-4288-ae01-fffe59bcb75a-kube-api-access-hsl46\") pod \"ceilometer-0\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " pod="openstack/ceilometer-0" Jan 05 23:38:22 crc kubenswrapper[5034]: I0105 23:38:22.617118 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.149859 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.191066 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerStarted","Data":"3914db75d254f1556b8d895d4c5980de455100df7d5493a4f365d665799c3efe"} Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.191197 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-api" containerID="cri-o://f55953e92b6565d431c7320114197a89a82b81939defe67c75d4f29903b50d3e" gracePeriod=30 Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.191254 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-listener" containerID="cri-o://3914db75d254f1556b8d895d4c5980de455100df7d5493a4f365d665799c3efe" gracePeriod=30 Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.191298 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-notifier" containerID="cri-o://e3a2eea89a6d50a81b766d77fe26e1952d7a0387d886e32366855808048924ec" gracePeriod=30 Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.191342 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-evaluator" containerID="cri-o://74b3f867d3d738cdc14271a06f29e017c7d9c0b3ce2e69acb5e6ee2434529458" gracePeriod=30 Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.197932 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerStarted","Data":"571172ca17b961040a86c96a4478649658976d5a565d249433e34edb9d13d01d"} Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.215786 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.126075945 podStartE2EDuration="8.215757816s" podCreationTimestamp="2026-01-05 23:38:15 +0000 UTC" firstStartedPulling="2026-01-05 23:38:16.381586649 +0000 UTC m=+6388.753586098" lastFinishedPulling="2026-01-05 23:38:22.47126853 +0000 UTC m=+6394.843267969" observedRunningTime="2026-01-05 23:38:23.212645558 +0000 UTC m=+6395.584645007" watchObservedRunningTime="2026-01-05 23:38:23.215757816 +0000 UTC m=+6395.587757255" Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.357770 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.358040 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9a16889f-0260-4fcc-8567-81a3da64667d" containerName="kube-state-metrics" containerID="cri-o://a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690" gracePeriod=30 Jan 05 23:38:23 crc kubenswrapper[5034]: I0105 23:38:23.871502 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d0f7073-d292-4943-8245-22b8485cfee9" path="/var/lib/kubelet/pods/1d0f7073-d292-4943-8245-22b8485cfee9/volumes" Jan 05 23:38:24 crc kubenswrapper[5034]: 
I0105 23:38:24.003998 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.145012 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb8bf\" (UniqueName: \"kubernetes.io/projected/9a16889f-0260-4fcc-8567-81a3da64667d-kube-api-access-qb8bf\") pod \"9a16889f-0260-4fcc-8567-81a3da64667d\" (UID: \"9a16889f-0260-4fcc-8567-81a3da64667d\") " Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.156117 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a16889f-0260-4fcc-8567-81a3da64667d-kube-api-access-qb8bf" (OuterVolumeSpecName: "kube-api-access-qb8bf") pod "9a16889f-0260-4fcc-8567-81a3da64667d" (UID: "9a16889f-0260-4fcc-8567-81a3da64667d"). InnerVolumeSpecName "kube-api-access-qb8bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.232931 5034 generic.go:334] "Generic (PLEG): container finished" podID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerID="e3a2eea89a6d50a81b766d77fe26e1952d7a0387d886e32366855808048924ec" exitCode=0 Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.232981 5034 generic.go:334] "Generic (PLEG): container finished" podID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerID="74b3f867d3d738cdc14271a06f29e017c7d9c0b3ce2e69acb5e6ee2434529458" exitCode=0 Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.232990 5034 generic.go:334] "Generic (PLEG): container finished" podID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerID="f55953e92b6565d431c7320114197a89a82b81939defe67c75d4f29903b50d3e" exitCode=0 Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.233035 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerDied","Data":"e3a2eea89a6d50a81b766d77fe26e1952d7a0387d886e32366855808048924ec"} Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.233145 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerDied","Data":"74b3f867d3d738cdc14271a06f29e017c7d9c0b3ce2e69acb5e6ee2434529458"} Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.233161 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerDied","Data":"f55953e92b6565d431c7320114197a89a82b81939defe67c75d4f29903b50d3e"} Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.241052 5034 generic.go:334] "Generic (PLEG): container finished" podID="9a16889f-0260-4fcc-8567-81a3da64667d" containerID="a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690" exitCode=2 Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.241187 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a16889f-0260-4fcc-8567-81a3da64667d","Type":"ContainerDied","Data":"a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690"} Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.241217 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.241263 5034 scope.go:117] "RemoveContainer" containerID="a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.241242 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a16889f-0260-4fcc-8567-81a3da64667d","Type":"ContainerDied","Data":"3d1df2c93d71dd02ea6c9c15998c80d17938f75578417b68ad051c739fbed755"} Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.248529 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerStarted","Data":"f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707"} Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.250851 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb8bf\" (UniqueName: \"kubernetes.io/projected/9a16889f-0260-4fcc-8567-81a3da64667d-kube-api-access-qb8bf\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.290878 5034 scope.go:117] "RemoveContainer" containerID="a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690" Jan 05 23:38:24 crc kubenswrapper[5034]: E0105 23:38:24.310629 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690\": container with ID starting with a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690 not found: ID does not exist" containerID="a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.310684 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690"} err="failed to get container status \"a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690\": rpc error: code = NotFound desc = could not find container \"a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690\": container with ID starting with a36cc957d36683146121abe40e26cc9b568598921ae9c7c1207b377581e25690 not found: ID does not exist" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.322521 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.342332 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.351744 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 23:38:24 crc kubenswrapper[5034]: E0105 23:38:24.352570 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a16889f-0260-4fcc-8567-81a3da64667d" containerName="kube-state-metrics" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.352594 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a16889f-0260-4fcc-8567-81a3da64667d" containerName="kube-state-metrics" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.352832 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a16889f-0260-4fcc-8567-81a3da64667d" containerName="kube-state-metrics" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.355216 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.362694 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.363002 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.363396 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.455336 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.455475 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.455508 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.455531 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g22rq\" (UniqueName: \"kubernetes.io/projected/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-api-access-g22rq\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.558512 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.558625 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.558665 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g22rq\" (UniqueName: \"kubernetes.io/projected/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-api-access-g22rq\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.558699 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.563950 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.564644 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.564696 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.576466 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g22rq\" (UniqueName: \"kubernetes.io/projected/1e73955a-5f4a-4db0-a0ae-3844971de40d-kube-api-access-g22rq\") pod \"kube-state-metrics-0\" (UID: \"1e73955a-5f4a-4db0-a0ae-3844971de40d\") " pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.697111 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 23:38:24 crc kubenswrapper[5034]: I0105 23:38:24.840367 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:38:24 crc kubenswrapper[5034]: E0105 23:38:24.840921 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:38:25 crc kubenswrapper[5034]: I0105 23:38:25.252320 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 23:38:25 crc kubenswrapper[5034]: W0105 23:38:25.253827 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e73955a_5f4a_4db0_a0ae_3844971de40d.slice/crio-7295a9f7484ce4e385fe9566ff30661fb55b08918da70924c53cd5410e77e56d WatchSource:0}: Error finding container 7295a9f7484ce4e385fe9566ff30661fb55b08918da70924c53cd5410e77e56d: Status 404 returned error can't find the container with id 7295a9f7484ce4e385fe9566ff30661fb55b08918da70924c53cd5410e77e56d Jan 05 23:38:25 crc kubenswrapper[5034]: I0105 23:38:25.265029 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerStarted","Data":"f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a"} Jan 05 23:38:25 crc kubenswrapper[5034]: I0105 23:38:25.789315 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:25 crc kubenswrapper[5034]: I0105 23:38:25.853697 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a16889f-0260-4fcc-8567-81a3da64667d" path="/var/lib/kubelet/pods/9a16889f-0260-4fcc-8567-81a3da64667d/volumes" Jan 05 23:38:26 crc kubenswrapper[5034]: I0105 23:38:26.254995 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:38:26 crc kubenswrapper[5034]: I0105 23:38:26.282067 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e73955a-5f4a-4db0-a0ae-3844971de40d","Type":"ContainerStarted","Data":"7e3027f21a90f83c4e766f486ff594653f9efdd92d91ac6cd22b5fa0b4938f25"} Jan 05 23:38:26 crc kubenswrapper[5034]: I0105 23:38:26.282136 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e73955a-5f4a-4db0-a0ae-3844971de40d","Type":"ContainerStarted","Data":"7295a9f7484ce4e385fe9566ff30661fb55b08918da70924c53cd5410e77e56d"} Jan 05 23:38:26 crc kubenswrapper[5034]: I0105 23:38:26.282718 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 05 23:38:26 crc kubenswrapper[5034]: I0105 23:38:26.291201 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerStarted","Data":"0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b"} Jan 05 23:38:26 crc kubenswrapper[5034]: I0105 23:38:26.304394 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=1.942833377 podStartE2EDuration="2.304366525s" podCreationTimestamp="2026-01-05 23:38:24 +0000 UTC" firstStartedPulling="2026-01-05 23:38:25.259106253 +0000 UTC m=+6397.631105692" lastFinishedPulling="2026-01-05 23:38:25.620639401 +0000 UTC m=+6397.992638840" observedRunningTime="2026-01-05 23:38:26.301791201 +0000 UTC m=+6398.673790640" watchObservedRunningTime="2026-01-05 23:38:26.304366525 +0000 UTC m=+6398.676365964" Jan 05 23:38:26 crc kubenswrapper[5034]: I0105 23:38:26.317239 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:38:27 crc kubenswrapper[5034]: I0105 23:38:27.049190 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klqvq"] Jan 05 23:38:27 crc kubenswrapper[5034]: I0105 23:38:27.300041 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-klqvq" podUID="09d918a1-ce14-4606-829e-af0575fa3505" containerName="registry-server" containerID="cri-o://24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297" gracePeriod=2 Jan 05 23:38:27 crc kubenswrapper[5034]: I0105 23:38:27.816359 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:38:27 crc kubenswrapper[5034]: I0105 23:38:27.957016 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-catalog-content\") pod \"09d918a1-ce14-4606-829e-af0575fa3505\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " Jan 05 23:38:27 crc kubenswrapper[5034]: I0105 23:38:27.957165 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-utilities\") pod \"09d918a1-ce14-4606-829e-af0575fa3505\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " Jan 05 23:38:27 crc kubenswrapper[5034]: I0105 23:38:27.957192 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mswtz\" (UniqueName: \"kubernetes.io/projected/09d918a1-ce14-4606-829e-af0575fa3505-kube-api-access-mswtz\") pod \"09d918a1-ce14-4606-829e-af0575fa3505\" (UID: \"09d918a1-ce14-4606-829e-af0575fa3505\") " Jan 05 23:38:27 crc kubenswrapper[5034]: I0105 23:38:27.958024 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-utilities" (OuterVolumeSpecName: "utilities") pod "09d918a1-ce14-4606-829e-af0575fa3505" (UID: "09d918a1-ce14-4606-829e-af0575fa3505"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:38:27 crc kubenswrapper[5034]: I0105 23:38:27.980765 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d918a1-ce14-4606-829e-af0575fa3505-kube-api-access-mswtz" (OuterVolumeSpecName: "kube-api-access-mswtz") pod "09d918a1-ce14-4606-829e-af0575fa3505" (UID: "09d918a1-ce14-4606-829e-af0575fa3505"). InnerVolumeSpecName "kube-api-access-mswtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.061047 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.061103 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mswtz\" (UniqueName: \"kubernetes.io/projected/09d918a1-ce14-4606-829e-af0575fa3505-kube-api-access-mswtz\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.081012 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09d918a1-ce14-4606-829e-af0575fa3505" (UID: "09d918a1-ce14-4606-829e-af0575fa3505"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.163695 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d918a1-ce14-4606-829e-af0575fa3505-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.313631 5034 generic.go:334] "Generic (PLEG): container finished" podID="09d918a1-ce14-4606-829e-af0575fa3505" containerID="24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297" exitCode=0 Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.313729 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klqvq" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.313758 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klqvq" event={"ID":"09d918a1-ce14-4606-829e-af0575fa3505","Type":"ContainerDied","Data":"24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297"} Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.314254 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klqvq" event={"ID":"09d918a1-ce14-4606-829e-af0575fa3505","Type":"ContainerDied","Data":"4b6d12c32a9d14ca5172dc5ef170bdeca6e984f33515309d736ca7816754fe5b"} Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.314302 5034 scope.go:117] "RemoveContainer" containerID="24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.318442 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerStarted","Data":"0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7"} Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.318681 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="proxy-httpd" containerID="cri-o://0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7" gracePeriod=30 Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.318686 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="ceilometer-central-agent" 
containerID="cri-o://f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707" gracePeriod=30 Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.318805 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.318708 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="ceilometer-notification-agent" containerID="cri-o://f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a" gracePeriod=30 Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.318708 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="sg-core" containerID="cri-o://0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b" gracePeriod=30 Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.347461 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.083917598 podStartE2EDuration="6.347439374s" podCreationTimestamp="2026-01-05 23:38:22 +0000 UTC" firstStartedPulling="2026-01-05 23:38:23.150500556 +0000 UTC m=+6395.522499995" lastFinishedPulling="2026-01-05 23:38:27.414022322 +0000 UTC m=+6399.786021771" observedRunningTime="2026-01-05 23:38:28.341971559 +0000 UTC m=+6400.713970998" watchObservedRunningTime="2026-01-05 23:38:28.347439374 +0000 UTC m=+6400.719438803" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.355902 5034 scope.go:117] "RemoveContainer" containerID="ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.381066 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klqvq"] Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.392354 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-klqvq"] Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.415725 5034 scope.go:117] "RemoveContainer" containerID="2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.445178 5034 scope.go:117] "RemoveContainer" containerID="24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297" Jan 05 23:38:28 crc kubenswrapper[5034]: E0105 23:38:28.445760 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297\": container with ID starting with 24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297 not found: ID does not exist" containerID="24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.445813 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297"} err="failed to get container status \"24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297\": rpc error: code = NotFound desc = could not find container \"24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297\": container with ID starting with 24f14e61c762b482e762ac07d28b38f285d6ec797a32c8440cc90870a8b74297 not found: ID does not exist" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 
23:38:28.445845 5034 scope.go:117] "RemoveContainer" containerID="ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005" Jan 05 23:38:28 crc kubenswrapper[5034]: E0105 23:38:28.446286 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005\": container with ID starting with ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005 not found: ID does not exist" containerID="ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.446347 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005"} err="failed to get container status \"ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005\": rpc error: code = NotFound desc = could not find container \"ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005\": container with ID starting with ab2d767fba9f31206ee1c7ebf5cc8c21421972ea68b1c259619bbf3faaa1b005 not found: ID does not exist" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.446386 5034 scope.go:117] "RemoveContainer" containerID="2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea" Jan 05 23:38:28 crc kubenswrapper[5034]: E0105 23:38:28.446684 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea\": container with ID starting with 2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea not found: ID does not exist" containerID="2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea" Jan 05 23:38:28 crc kubenswrapper[5034]: I0105 23:38:28.446724 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea"} err="failed to get container status \"2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea\": rpc error: code = NotFound desc = could not find container \"2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea\": container with ID starting with 2ed576edb24075ac78e3af1156c8cec3a47b7c5a0e90fd643fa91cd1171941ea not found: ID does not exist" Jan 05 23:38:29 crc kubenswrapper[5034]: I0105 23:38:29.341059 5034 generic.go:334] "Generic (PLEG): container finished" podID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerID="0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7" exitCode=0 Jan 05 23:38:29 crc kubenswrapper[5034]: I0105 23:38:29.341528 5034 generic.go:334] "Generic (PLEG): container finished" podID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerID="0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b" exitCode=2 Jan 05 23:38:29 crc kubenswrapper[5034]: I0105 23:38:29.341538 5034 generic.go:334] "Generic (PLEG): container finished" podID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerID="f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a" exitCode=0 Jan 05 23:38:29 crc kubenswrapper[5034]: I0105 23:38:29.341566 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerDied","Data":"0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7"} Jan 05 23:38:29 crc kubenswrapper[5034]: I0105 23:38:29.341614 
5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerDied","Data":"0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b"} Jan 05 23:38:29 crc kubenswrapper[5034]: I0105 23:38:29.341626 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerDied","Data":"f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a"} Jan 05 23:38:29 crc kubenswrapper[5034]: I0105 23:38:29.850807 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d918a1-ce14-4606-829e-af0575fa3505" path="/var/lib/kubelet/pods/09d918a1-ce14-4606-829e-af0575fa3505/volumes" Jan 05 23:38:30 crc kubenswrapper[5034]: I0105 23:38:30.057131 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cqpdk"] Jan 05 23:38:30 crc kubenswrapper[5034]: I0105 23:38:30.071114 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cqpdk"] Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.165446 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.336700 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-log-httpd\") pod \"dfa813b2-8376-4288-ae01-fffe59bcb75a\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.337299 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-combined-ca-bundle\") pod \"dfa813b2-8376-4288-ae01-fffe59bcb75a\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.337391 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dfa813b2-8376-4288-ae01-fffe59bcb75a" (UID: "dfa813b2-8376-4288-ae01-fffe59bcb75a"). InnerVolumeSpecName "log-httpd". 
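[note] The "Killing container with a grace period" records above (gracePeriod=30 for ceilometer-0's four containers, gracePeriod=2 for the registry-server) follow the standard stop sequence: signal the container, wait up to the grace period, then force-kill. A process-level sketch of that pattern under plain os/exec, not the CRI implementation (Unix-only because of syscall.SIGTERM):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the kubelet's stop sequence at process level:
// SIGTERM first, then SIGKILL once the grace period expires.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited voluntarily within the grace period
	case <-time.After(grace):
		return cmd.Process.Kill() // grace expired: hard kill
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// A 2-second grace period, like the registry-server kill in this log.
	fmt.Println(stopWithGrace(cmd, 2*time.Second))
}
```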
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.337433 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-config-data\") pod \"dfa813b2-8376-4288-ae01-fffe59bcb75a\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.337588 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsl46\" (UniqueName: \"kubernetes.io/projected/dfa813b2-8376-4288-ae01-fffe59bcb75a-kube-api-access-hsl46\") pod \"dfa813b2-8376-4288-ae01-fffe59bcb75a\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.337799 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-run-httpd\") pod \"dfa813b2-8376-4288-ae01-fffe59bcb75a\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.337873 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-sg-core-conf-yaml\") pod \"dfa813b2-8376-4288-ae01-fffe59bcb75a\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.337939 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-scripts\") pod \"dfa813b2-8376-4288-ae01-fffe59bcb75a\" (UID: \"dfa813b2-8376-4288-ae01-fffe59bcb75a\") " Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.338261 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dfa813b2-8376-4288-ae01-fffe59bcb75a" (UID: "dfa813b2-8376-4288-ae01-fffe59bcb75a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.339497 5034 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.339544 5034 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfa813b2-8376-4288-ae01-fffe59bcb75a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.346616 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa813b2-8376-4288-ae01-fffe59bcb75a-kube-api-access-hsl46" (OuterVolumeSpecName: "kube-api-access-hsl46") pod "dfa813b2-8376-4288-ae01-fffe59bcb75a" (UID: "dfa813b2-8376-4288-ae01-fffe59bcb75a"). InnerVolumeSpecName "kube-api-access-hsl46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.347775 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-scripts" (OuterVolumeSpecName: "scripts") pod "dfa813b2-8376-4288-ae01-fffe59bcb75a" (UID: "dfa813b2-8376-4288-ae01-fffe59bcb75a"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.366642 5034 generic.go:334] "Generic (PLEG): container finished" podID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerID="f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707" exitCode=0 Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.366712 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerDied","Data":"f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707"} Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.366750 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfa813b2-8376-4288-ae01-fffe59bcb75a","Type":"ContainerDied","Data":"571172ca17b961040a86c96a4478649658976d5a565d249433e34edb9d13d01d"} Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.366776 5034 scope.go:117] "RemoveContainer" containerID="0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.366885 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.376242 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dfa813b2-8376-4288-ae01-fffe59bcb75a" (UID: "dfa813b2-8376-4288-ae01-fffe59bcb75a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.441580 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.441610 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsl46\" (UniqueName: \"kubernetes.io/projected/dfa813b2-8376-4288-ae01-fffe59bcb75a-kube-api-access-hsl46\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.441620 5034 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.447391 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfa813b2-8376-4288-ae01-fffe59bcb75a" (UID: "dfa813b2-8376-4288-ae01-fffe59bcb75a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.466238 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-config-data" (OuterVolumeSpecName: "config-data") pod "dfa813b2-8376-4288-ae01-fffe59bcb75a" (UID: "dfa813b2-8376-4288-ae01-fffe59bcb75a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.469364 5034 scope.go:117] "RemoveContainer" containerID="0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.488043 5034 scope.go:117] "RemoveContainer" containerID="f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.514557 5034 scope.go:117] "RemoveContainer" containerID="f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.536759 5034 scope.go:117] "RemoveContainer" containerID="0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.537210 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7\": container with ID starting with 0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7 not found: ID does not exist" containerID="0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.537251 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7"} err="failed to get container status \"0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7\": rpc error: code = NotFound desc = could not find container \"0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7\": container with ID starting with 0857a1e8e0d0979630639da15cd4056cdb750acf6f5a0e6f8b3f6541b4e742a7 not found: ID does not exist" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.537279 5034 scope.go:117] "RemoveContainer" containerID="0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.537474 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b\": container with ID starting with 0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b not found: ID does not exist" containerID="0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.537496 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b"} err="failed to get container status \"0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b\": rpc error: code = NotFound desc = could not find container \"0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b\": container with ID starting with 0bbda3a645e1eeb95b1a52699d3b391fafc15c708ed27067f483b76c4b8de72b not found: ID does not exist" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.537512 5034 scope.go:117] "RemoveContainer" containerID="f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.537701 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a\": container with ID starting with 
f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a not found: ID does not exist" containerID="f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.537721 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a"} err="failed to get container status \"f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a\": rpc error: code = NotFound desc = could not find container \"f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a\": container with ID starting with f5177d14a334281a2de77be2a2251b17f4915abe9aad0c1646994c16d478e91a not found: ID does not exist" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.537736 5034 scope.go:117] "RemoveContainer" containerID="f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.537937 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707\": container with ID starting with f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707 not found: ID does not exist" containerID="f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.537960 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707"} err="failed to get container status \"f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707\": rpc error: code = NotFound desc = could not find container \"f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707\": container with ID starting with f5a9aa428cb8253d41f365811005922e4888d32b5d32d67f812045c5c68f4707 not found: ID does not exist" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.544345 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.544384 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa813b2-8376-4288-ae01-fffe59bcb75a-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.702172 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.725287 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.741774 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.742323 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d918a1-ce14-4606-829e-af0575fa3505" containerName="registry-server" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742346 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d918a1-ce14-4606-829e-af0575fa3505" containerName="registry-server" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.742360 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d918a1-ce14-4606-829e-af0575fa3505" 
containerName="extract-utilities" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742367 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d918a1-ce14-4606-829e-af0575fa3505" containerName="extract-utilities" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.742393 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="sg-core" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742399 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="sg-core" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.742415 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="ceilometer-central-agent" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742421 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="ceilometer-central-agent" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.742445 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d918a1-ce14-4606-829e-af0575fa3505" containerName="extract-content" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742451 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d918a1-ce14-4606-829e-af0575fa3505" containerName="extract-content" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.742467 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="ceilometer-notification-agent" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742474 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="ceilometer-notification-agent" Jan 05 23:38:31 crc kubenswrapper[5034]: E0105 23:38:31.742484 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="proxy-httpd" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742492 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="proxy-httpd" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742700 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="proxy-httpd" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742716 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="sg-core" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742736 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d918a1-ce14-4606-829e-af0575fa3505" containerName="registry-server" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742753 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="ceilometer-central-agent" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.742759 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" containerName="ceilometer-notification-agent" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.744783 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.748164 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.748963 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.749214 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.774505 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.850679 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-scripts\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.850737 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f0499a8-aa42-4255-b00a-02a54e530c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.850805 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-config-data\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.850833 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f0499a8-aa42-4255-b00a-02a54e530c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.850885 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6dj\" (UniqueName: \"kubernetes.io/projected/2f0499a8-aa42-4255-b00a-02a54e530c2c-kube-api-access-fv6dj\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.850912 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.850920 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b12947-70bd-491c-99ad-28221fa1f2a2" path="/var/lib/kubelet/pods/09b12947-70bd-491c-99ad-28221fa1f2a2/volumes" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.850943 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " 
pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.851041 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.851884 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa813b2-8376-4288-ae01-fffe59bcb75a" path="/var/lib/kubelet/pods/dfa813b2-8376-4288-ae01-fffe59bcb75a/volumes" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.952904 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.953004 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.953071 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-scripts\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.954109 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f0499a8-aa42-4255-b00a-02a54e530c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.954280 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-config-data\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.954321 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f0499a8-aa42-4255-b00a-02a54e530c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.954359 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6dj\" (UniqueName: \"kubernetes.io/projected/2f0499a8-aa42-4255-b00a-02a54e530c2c-kube-api-access-fv6dj\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.954425 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 
23:38:31.954732 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f0499a8-aa42-4255-b00a-02a54e530c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.955023 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f0499a8-aa42-4255-b00a-02a54e530c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.958212 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-scripts\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.958365 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.958787 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.958992 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-config-data\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.971597 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0499a8-aa42-4255-b00a-02a54e530c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:31 crc kubenswrapper[5034]: I0105 23:38:31.972213 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6dj\" (UniqueName: \"kubernetes.io/projected/2f0499a8-aa42-4255-b00a-02a54e530c2c-kube-api-access-fv6dj\") pod \"ceilometer-0\" (UID: \"2f0499a8-aa42-4255-b00a-02a54e530c2c\") " pod="openstack/ceilometer-0" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.130966 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.162303 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z6kxg"] Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.165738 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.205195 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6kxg"] Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.264996 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wkt\" (UniqueName: \"kubernetes.io/projected/34a71901-9922-446e-b97a-f6135be362a0-kube-api-access-j2wkt\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.265063 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-catalog-content\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.266954 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-utilities\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.368986 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-utilities\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.369425 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wkt\" (UniqueName: \"kubernetes.io/projected/34a71901-9922-446e-b97a-f6135be362a0-kube-api-access-j2wkt\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.369482 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-catalog-content\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.369862 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-utilities\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.370020 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-catalog-content\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.396899 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j2wkt\" (UniqueName: \"kubernetes.io/projected/34a71901-9922-446e-b97a-f6135be362a0-kube-api-access-j2wkt\") pod \"community-operators-z6kxg\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.607603 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:32 crc kubenswrapper[5034]: I0105 23:38:32.692237 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 23:38:33 crc kubenswrapper[5034]: I0105 23:38:33.091904 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6kxg"] Jan 05 23:38:33 crc kubenswrapper[5034]: I0105 23:38:33.394849 5034 generic.go:334] "Generic (PLEG): container finished" podID="34a71901-9922-446e-b97a-f6135be362a0" containerID="6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718" exitCode=0 Jan 05 23:38:33 crc kubenswrapper[5034]: I0105 23:38:33.394897 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6kxg" event={"ID":"34a71901-9922-446e-b97a-f6135be362a0","Type":"ContainerDied","Data":"6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718"} Jan 05 23:38:33 crc kubenswrapper[5034]: I0105 23:38:33.395253 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6kxg" event={"ID":"34a71901-9922-446e-b97a-f6135be362a0","Type":"ContainerStarted","Data":"13d833e81097c9b1618779fc7ddb2570689e69673efd9a72fa6182361755c609"} Jan 05 23:38:33 crc kubenswrapper[5034]: I0105 23:38:33.396511 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f0499a8-aa42-4255-b00a-02a54e530c2c","Type":"ContainerStarted","Data":"abeacd9f7f6afc50c899bd39948e3c446f6f82b18f8b2f543200cb1221e8a658"} Jan 05 23:38:34 crc kubenswrapper[5034]: I0105 23:38:34.408936 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6kxg" event={"ID":"34a71901-9922-446e-b97a-f6135be362a0","Type":"ContainerStarted","Data":"3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795"} Jan 05 23:38:34 crc kubenswrapper[5034]: I0105 23:38:34.422857 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f0499a8-aa42-4255-b00a-02a54e530c2c","Type":"ContainerStarted","Data":"c8b640ef37cbc41ed0c6218ff33c82a02ff5b64df39da972b2c8b9ab5210efa4"} Jan 05 23:38:34 crc kubenswrapper[5034]: I0105 23:38:34.422914 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f0499a8-aa42-4255-b00a-02a54e530c2c","Type":"ContainerStarted","Data":"765ca334b70931a636f6c7fc7030a64be39b427ebf51ede0f92dd37e43e9f38d"} Jan 05 23:38:34 crc kubenswrapper[5034]: I0105 23:38:34.710786 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 05 23:38:35 crc kubenswrapper[5034]: I0105 23:38:35.434730 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f0499a8-aa42-4255-b00a-02a54e530c2c","Type":"ContainerStarted","Data":"3b31f227bdee9a201caa1487ac7bfa5f8e1c15bf43216cbfadab3366e7e65719"} Jan 05 23:38:35 crc kubenswrapper[5034]: I0105 23:38:35.437890 5034 generic.go:334] "Generic (PLEG): container finished" 
podID="34a71901-9922-446e-b97a-f6135be362a0" containerID="3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795" exitCode=0 Jan 05 23:38:35 crc kubenswrapper[5034]: I0105 23:38:35.437922 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6kxg" event={"ID":"34a71901-9922-446e-b97a-f6135be362a0","Type":"ContainerDied","Data":"3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795"} Jan 05 23:38:35 crc kubenswrapper[5034]: I0105 23:38:35.838842 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:38:35 crc kubenswrapper[5034]: E0105 23:38:35.839160 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:38:36 crc kubenswrapper[5034]: I0105 23:38:36.449441 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6kxg" event={"ID":"34a71901-9922-446e-b97a-f6135be362a0","Type":"ContainerStarted","Data":"e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1"} Jan 05 23:38:36 crc kubenswrapper[5034]: I0105 23:38:36.452162 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f0499a8-aa42-4255-b00a-02a54e530c2c","Type":"ContainerStarted","Data":"1e861c95424f13ea7794caea1f7afc7025f874d110fe2a9fbfb1332d02d0a128"} Jan 05 23:38:36 crc kubenswrapper[5034]: I0105 23:38:36.452929 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 23:38:36 crc kubenswrapper[5034]: I0105 23:38:36.478918 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z6kxg" podStartSLOduration=1.914515395 podStartE2EDuration="4.478858806s" podCreationTimestamp="2026-01-05 23:38:32 +0000 UTC" firstStartedPulling="2026-01-05 23:38:33.397030041 +0000 UTC m=+6405.769029470" lastFinishedPulling="2026-01-05 23:38:35.961373442 +0000 UTC m=+6408.333372881" observedRunningTime="2026-01-05 23:38:36.467979806 +0000 UTC m=+6408.839979245" watchObservedRunningTime="2026-01-05 23:38:36.478858806 +0000 UTC m=+6408.850858245" Jan 05 23:38:42 crc kubenswrapper[5034]: I0105 23:38:42.607880 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:42 crc kubenswrapper[5034]: I0105 23:38:42.608435 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:42 crc kubenswrapper[5034]: I0105 23:38:42.659294 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:42 crc kubenswrapper[5034]: I0105 23:38:42.697798 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.391054286 podStartE2EDuration="11.697758262s" podCreationTimestamp="2026-01-05 23:38:31 +0000 UTC" firstStartedPulling="2026-01-05 23:38:32.709644523 +0000 UTC m=+6405.081643962" lastFinishedPulling="2026-01-05 23:38:36.016348509 +0000 UTC 
m=+6408.388347938" observedRunningTime="2026-01-05 23:38:36.509495599 +0000 UTC m=+6408.881495038" watchObservedRunningTime="2026-01-05 23:38:42.697758262 +0000 UTC m=+6415.069757741" Jan 05 23:38:43 crc kubenswrapper[5034]: I0105 23:38:43.568072 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:43 crc kubenswrapper[5034]: I0105 23:38:43.630165 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6kxg"] Jan 05 23:38:45 crc kubenswrapper[5034]: I0105 23:38:45.538888 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z6kxg" podUID="34a71901-9922-446e-b97a-f6135be362a0" containerName="registry-server" containerID="cri-o://e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1" gracePeriod=2 Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.227062 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.313049 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-utilities\") pod \"34a71901-9922-446e-b97a-f6135be362a0\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.313379 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2wkt\" (UniqueName: \"kubernetes.io/projected/34a71901-9922-446e-b97a-f6135be362a0-kube-api-access-j2wkt\") pod \"34a71901-9922-446e-b97a-f6135be362a0\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.313526 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-catalog-content\") pod \"34a71901-9922-446e-b97a-f6135be362a0\" (UID: \"34a71901-9922-446e-b97a-f6135be362a0\") " Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.313879 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-utilities" (OuterVolumeSpecName: "utilities") pod "34a71901-9922-446e-b97a-f6135be362a0" (UID: "34a71901-9922-446e-b97a-f6135be362a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.314174 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.319167 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a71901-9922-446e-b97a-f6135be362a0-kube-api-access-j2wkt" (OuterVolumeSpecName: "kube-api-access-j2wkt") pod "34a71901-9922-446e-b97a-f6135be362a0" (UID: "34a71901-9922-446e-b97a-f6135be362a0"). InnerVolumeSpecName "kube-api-access-j2wkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.379528 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34a71901-9922-446e-b97a-f6135be362a0" (UID: "34a71901-9922-446e-b97a-f6135be362a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.416066 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a71901-9922-446e-b97a-f6135be362a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.416130 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2wkt\" (UniqueName: \"kubernetes.io/projected/34a71901-9922-446e-b97a-f6135be362a0-kube-api-access-j2wkt\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.552838 5034 generic.go:334] "Generic (PLEG): container finished" podID="34a71901-9922-446e-b97a-f6135be362a0" containerID="e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1" exitCode=0 Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.552898 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6kxg" event={"ID":"34a71901-9922-446e-b97a-f6135be362a0","Type":"ContainerDied","Data":"e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1"} Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.552941 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6kxg" event={"ID":"34a71901-9922-446e-b97a-f6135be362a0","Type":"ContainerDied","Data":"13d833e81097c9b1618779fc7ddb2570689e69673efd9a72fa6182361755c609"} Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.552944 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6kxg" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.552964 5034 scope.go:117] "RemoveContainer" containerID="e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.580864 5034 scope.go:117] "RemoveContainer" containerID="3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.600252 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6kxg"] Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.610258 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z6kxg"] Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.638903 5034 scope.go:117] "RemoveContainer" containerID="6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.667671 5034 scope.go:117] "RemoveContainer" containerID="e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1" Jan 05 23:38:46 crc kubenswrapper[5034]: E0105 23:38:46.668212 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1\": container with ID starting with e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1 not found: ID does not exist" containerID="e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.668261 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1"} err="failed to get container status \"e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1\": rpc error: code = NotFound desc = could not find container \"e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1\": container with ID starting with e8425e6e6a76eef4ff7200c94c0145c3dd16b17b57be6c45a5b034ef497967a1 not found: ID does not exist" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.668291 5034 scope.go:117] "RemoveContainer" containerID="3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795" Jan 05 23:38:46 crc kubenswrapper[5034]: E0105 23:38:46.668604 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795\": container with ID starting with 3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795 not found: ID does not exist" containerID="3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.668635 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795"} err="failed to get container status \"3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795\": rpc error: code = NotFound desc = could not find container \"3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795\": container with ID starting with 3930a41520026471f7b2285e52df6efecadba652814a83085bd8fdcb07276795 not found: ID does not exist" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.668655 5034 scope.go:117] "RemoveContainer" 
containerID="6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718" Jan 05 23:38:46 crc kubenswrapper[5034]: E0105 23:38:46.669012 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718\": container with ID starting with 6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718 not found: ID does not exist" containerID="6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718" Jan 05 23:38:46 crc kubenswrapper[5034]: I0105 23:38:46.669039 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718"} err="failed to get container status \"6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718\": rpc error: code = NotFound desc = could not find container \"6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718\": container with ID starting with 6209b634a308a02449668609da677c769313d1d8f82b9e74c57a6b16f5fdd718 not found: ID does not exist" Jan 05 23:38:47 crc kubenswrapper[5034]: I0105 23:38:47.855741 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a71901-9922-446e-b97a-f6135be362a0" path="/var/lib/kubelet/pods/34a71901-9922-446e-b97a-f6135be362a0/volumes" Jan 05 23:38:49 crc kubenswrapper[5034]: I0105 23:38:49.067983 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwjl"] Jan 05 23:38:49 crc kubenswrapper[5034]: I0105 23:38:49.084034 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwjl"] Jan 05 23:38:49 crc kubenswrapper[5034]: I0105 23:38:49.855831 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b19617f-1b10-4ff2-ad1f-f31d20663dcd" path="/var/lib/kubelet/pods/6b19617f-1b10-4ff2-ad1f-f31d20663dcd/volumes" Jan 05 23:38:50 crc kubenswrapper[5034]: I0105 23:38:50.029777 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-m46xl"] Jan 05 23:38:50 crc kubenswrapper[5034]: I0105 23:38:50.041701 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-m46xl"] Jan 05 23:38:50 crc kubenswrapper[5034]: I0105 23:38:50.839087 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:38:50 crc kubenswrapper[5034]: E0105 23:38:50.839435 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:38:51 crc kubenswrapper[5034]: I0105 23:38:51.854632 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d5ebff2-2a88-4059-b5dc-15a654cf534f" path="/var/lib/kubelet/pods/9d5ebff2-2a88-4059-b5dc-15a654cf534f/volumes" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.624769 5034 generic.go:334] "Generic (PLEG): container finished" podID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerID="3914db75d254f1556b8d895d4c5980de455100df7d5493a4f365d665799c3efe" exitCode=137 Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.624829 5034 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerDied","Data":"3914db75d254f1556b8d895d4c5980de455100df7d5493a4f365d665799c3efe"} Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.625301 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"92dfe6ee-f148-404b-86f1-e6c5c5893732","Type":"ContainerDied","Data":"4b1d90d1f5e415d1ec6970fa255deec2767bf6c74f4755113d6db3838defde39"} Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.625330 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b1d90d1f5e415d1ec6970fa255deec2767bf6c74f4755113d6db3838defde39" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.706440 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.782694 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr2p9\" (UniqueName: \"kubernetes.io/projected/92dfe6ee-f148-404b-86f1-e6c5c5893732-kube-api-access-jr2p9\") pod \"92dfe6ee-f148-404b-86f1-e6c5c5893732\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.782948 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-combined-ca-bundle\") pod \"92dfe6ee-f148-404b-86f1-e6c5c5893732\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.782976 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-scripts\") pod \"92dfe6ee-f148-404b-86f1-e6c5c5893732\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.783060 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-config-data\") pod \"92dfe6ee-f148-404b-86f1-e6c5c5893732\" (UID: \"92dfe6ee-f148-404b-86f1-e6c5c5893732\") " Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.791253 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-scripts" (OuterVolumeSpecName: "scripts") pod "92dfe6ee-f148-404b-86f1-e6c5c5893732" (UID: "92dfe6ee-f148-404b-86f1-e6c5c5893732"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.791335 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfe6ee-f148-404b-86f1-e6c5c5893732-kube-api-access-jr2p9" (OuterVolumeSpecName: "kube-api-access-jr2p9") pod "92dfe6ee-f148-404b-86f1-e6c5c5893732" (UID: "92dfe6ee-f148-404b-86f1-e6c5c5893732"). InnerVolumeSpecName "kube-api-access-jr2p9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.886963 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr2p9\" (UniqueName: \"kubernetes.io/projected/92dfe6ee-f148-404b-86f1-e6c5c5893732-kube-api-access-jr2p9\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.887009 5034 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.915244 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92dfe6ee-f148-404b-86f1-e6c5c5893732" (UID: "92dfe6ee-f148-404b-86f1-e6c5c5893732"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.947022 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-config-data" (OuterVolumeSpecName: "config-data") pod "92dfe6ee-f148-404b-86f1-e6c5c5893732" (UID: "92dfe6ee-f148-404b-86f1-e6c5c5893732"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.989157 5034 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:53 crc kubenswrapper[5034]: I0105 23:38:53.989190 5034 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dfe6ee-f148-404b-86f1-e6c5c5893732-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.634380 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.672220 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.685545 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.700176 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:54 crc kubenswrapper[5034]: E0105 23:38:54.700722 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-listener" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.700747 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-listener" Jan 05 23:38:54 crc kubenswrapper[5034]: E0105 23:38:54.700767 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-notifier" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.700776 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-notifier" Jan 05 23:38:54 crc kubenswrapper[5034]: E0105 23:38:54.700796 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-api" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.700804 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-api" Jan 05 23:38:54 crc kubenswrapper[5034]: E0105 23:38:54.700823 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a71901-9922-446e-b97a-f6135be362a0" containerName="registry-server" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.700830 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a71901-9922-446e-b97a-f6135be362a0" containerName="registry-server" Jan 05 23:38:54 crc kubenswrapper[5034]: E0105 23:38:54.700849 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a71901-9922-446e-b97a-f6135be362a0" containerName="extract-content" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.700857 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a71901-9922-446e-b97a-f6135be362a0" containerName="extract-content" Jan 05 23:38:54 crc kubenswrapper[5034]: E0105 23:38:54.700870 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-evaluator" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.700879 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-evaluator" Jan 05 23:38:54 crc kubenswrapper[5034]: E0105 23:38:54.700918 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a71901-9922-446e-b97a-f6135be362a0" containerName="extract-utilities" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.700927 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a71901-9922-446e-b97a-f6135be362a0" containerName="extract-utilities" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.701173 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-notifier" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.701209 5034 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-evaluator" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.701219 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-listener" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.701236 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" containerName="aodh-api" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.701249 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a71901-9922-446e-b97a-f6135be362a0" containerName="registry-server" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.715252 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.715414 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.720209 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.720398 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.720444 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-stdcb" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.720473 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.720546 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.808322 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-scripts\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.809279 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-internal-tls-certs\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.809429 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-config-data\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.809719 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-combined-ca-bundle\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.809891 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-public-tls-certs\") pod \"aodh-0\" 
(UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.810159 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9sm\" (UniqueName: \"kubernetes.io/projected/00196117-ebbd-4c00-92d7-c2660c0c5b78-kube-api-access-sp9sm\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.913017 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-public-tls-certs\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.913261 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp9sm\" (UniqueName: \"kubernetes.io/projected/00196117-ebbd-4c00-92d7-c2660c0c5b78-kube-api-access-sp9sm\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.913501 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-scripts\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.914318 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-internal-tls-certs\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.914507 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-config-data\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.914655 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-combined-ca-bundle\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.918344 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-combined-ca-bundle\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.918451 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-public-tls-certs\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.918995 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-internal-tls-certs\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 
23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.919488 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-scripts\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.929666 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp9sm\" (UniqueName: \"kubernetes.io/projected/00196117-ebbd-4c00-92d7-c2660c0c5b78-kube-api-access-sp9sm\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:54 crc kubenswrapper[5034]: I0105 23:38:54.929724 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00196117-ebbd-4c00-92d7-c2660c0c5b78-config-data\") pod \"aodh-0\" (UID: \"00196117-ebbd-4c00-92d7-c2660c0c5b78\") " pod="openstack/aodh-0" Jan 05 23:38:55 crc kubenswrapper[5034]: I0105 23:38:55.036154 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 05 23:38:55 crc kubenswrapper[5034]: W0105 23:38:55.517517 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00196117_ebbd_4c00_92d7_c2660c0c5b78.slice/crio-f7cda261d4f2fb1d5f7d6811c530f8b809c46834bcf16ad54fb486b9af1c7de5 WatchSource:0}: Error finding container f7cda261d4f2fb1d5f7d6811c530f8b809c46834bcf16ad54fb486b9af1c7de5: Status 404 returned error can't find the container with id f7cda261d4f2fb1d5f7d6811c530f8b809c46834bcf16ad54fb486b9af1c7de5 Jan 05 23:38:55 crc kubenswrapper[5034]: I0105 23:38:55.518022 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 23:38:55 crc kubenswrapper[5034]: I0105 23:38:55.644569 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"00196117-ebbd-4c00-92d7-c2660c0c5b78","Type":"ContainerStarted","Data":"f7cda261d4f2fb1d5f7d6811c530f8b809c46834bcf16ad54fb486b9af1c7de5"} Jan 05 23:38:55 crc kubenswrapper[5034]: I0105 23:38:55.855081 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfe6ee-f148-404b-86f1-e6c5c5893732" path="/var/lib/kubelet/pods/92dfe6ee-f148-404b-86f1-e6c5c5893732/volumes" Jan 05 23:38:56 crc kubenswrapper[5034]: I0105 23:38:56.654987 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"00196117-ebbd-4c00-92d7-c2660c0c5b78","Type":"ContainerStarted","Data":"cd37218f172c5772ade20ceb7234574527d93918ba7e026cc3e8ec1cd863573d"} Jan 05 23:38:57 crc kubenswrapper[5034]: I0105 23:38:57.665972 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"00196117-ebbd-4c00-92d7-c2660c0c5b78","Type":"ContainerStarted","Data":"295a6428853aab180c97b5a6ba8503e45f3ae93befce76baa84d2fb11763f0c5"} Jan 05 23:38:58 crc kubenswrapper[5034]: I0105 23:38:58.690931 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"00196117-ebbd-4c00-92d7-c2660c0c5b78","Type":"ContainerStarted","Data":"d11af459a4beba7447824af6439c5f2ef5e2717344a7728ab31abac5df601d2c"} Jan 05 23:38:58 crc kubenswrapper[5034]: I0105 23:38:58.691484 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"00196117-ebbd-4c00-92d7-c2660c0c5b78","Type":"ContainerStarted","Data":"6fe942326c64f9878ed78466d2df04383e411924c9449eee2aa7433e8103ec9e"} 
Jan 05 23:38:58 crc kubenswrapper[5034]: I0105 23:38:58.727547 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.154898114 podStartE2EDuration="4.72748902s" podCreationTimestamp="2026-01-05 23:38:54 +0000 UTC" firstStartedPulling="2026-01-05 23:38:55.521010261 +0000 UTC m=+6427.893009700" lastFinishedPulling="2026-01-05 23:38:58.093601167 +0000 UTC m=+6430.465600606" observedRunningTime="2026-01-05 23:38:58.721350895 +0000 UTC m=+6431.093350334" watchObservedRunningTime="2026-01-05 23:38:58.72748902 +0000 UTC m=+6431.099488459"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.370873 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68c4dd6795-kwkqm"]
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.373580 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.376037 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.397626 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68c4dd6795-kwkqm"]
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.473230 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-dns-svc\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.473553 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-sb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.473658 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-config\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.473726 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxnkb\" (UniqueName: \"kubernetes.io/projected/2c2171b2-fe72-412a-b1e8-0d31b7861f20-kube-api-access-xxnkb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.473755 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-nb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.473827 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-openstack-cell1\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.575521 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-dns-svc\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.575597 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-sb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.575696 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-config\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.575748 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxnkb\" (UniqueName: \"kubernetes.io/projected/2c2171b2-fe72-412a-b1e8-0d31b7861f20-kube-api-access-xxnkb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.575777 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-nb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.575857 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-openstack-cell1\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.576801 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-config\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.576801 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-sb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.577053 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-nb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.577136 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-dns-svc\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.577142 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-openstack-cell1\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.616062 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxnkb\" (UniqueName: \"kubernetes.io/projected/2c2171b2-fe72-412a-b1e8-0d31b7861f20-kube-api-access-xxnkb\") pod \"dnsmasq-dns-68c4dd6795-kwkqm\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:00 crc kubenswrapper[5034]: I0105 23:39:00.718040 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:01 crc kubenswrapper[5034]: I0105 23:39:01.190183 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68c4dd6795-kwkqm"]
Jan 05 23:39:01 crc kubenswrapper[5034]: I0105 23:39:01.719021 5034 generic.go:334] "Generic (PLEG): container finished" podID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" containerID="91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9" exitCode=0
Jan 05 23:39:01 crc kubenswrapper[5034]: I0105 23:39:01.719136 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" event={"ID":"2c2171b2-fe72-412a-b1e8-0d31b7861f20","Type":"ContainerDied","Data":"91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9"}
Jan 05 23:39:01 crc kubenswrapper[5034]: I0105 23:39:01.719388 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" event={"ID":"2c2171b2-fe72-412a-b1e8-0d31b7861f20","Type":"ContainerStarted","Data":"fa774715dd75eb4eb9d32705c74481257a20e37a05718e7e9f80ab8c52966661"}
Jan 05 23:39:02 crc kubenswrapper[5034]: I0105 23:39:02.139292 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 05 23:39:02 crc kubenswrapper[5034]: I0105 23:39:02.731566 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" event={"ID":"2c2171b2-fe72-412a-b1e8-0d31b7861f20","Type":"ContainerStarted","Data":"c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4"}
Jan 05 23:39:02 crc kubenswrapper[5034]: I0105 23:39:02.732739 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:02 crc kubenswrapper[5034]: I0105 23:39:02.755995 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" podStartSLOduration=2.755967864 podStartE2EDuration="2.755967864s" podCreationTimestamp="2026-01-05 23:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:39:02.753951667 +0000 UTC m=+6435.125951106" watchObservedRunningTime="2026-01-05 23:39:02.755967864 +0000 UTC m=+6435.127967303"
Jan 05 23:39:03 crc kubenswrapper[5034]: I0105 23:39:03.839308 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16"
Jan 05 23:39:03 crc kubenswrapper[5034]: E0105 23:39:03.839804 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
Jan 05 23:39:08 crc kubenswrapper[5034]: I0105 23:39:08.047458 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qxmd7"]
Jan 05 23:39:08 crc kubenswrapper[5034]: I0105 23:39:08.061177 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qxmd7"]
Jan 05 23:39:09 crc kubenswrapper[5034]: I0105 23:39:09.854533 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf16341-2b6a-4d34-b363-09567d1148e9" path="/var/lib/kubelet/pods/bcf16341-2b6a-4d34-b363-09567d1148e9/volumes"
Jan 05 23:39:10 crc kubenswrapper[5034]: I0105 23:39:10.719925 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm"
Jan 05 23:39:10 crc kubenswrapper[5034]: I0105 23:39:10.796316 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8689fb8b95-928kk"]
Jan 05 23:39:10 crc kubenswrapper[5034]: I0105 23:39:10.796650 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" podUID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" containerName="dnsmasq-dns" containerID="cri-o://a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7" gracePeriod=10
Jan 05 23:39:10 crc kubenswrapper[5034]: I0105 23:39:10.972719 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-694df96497-fdnmh"]
Jan 05 23:39:10 crc kubenswrapper[5034]: I0105 23:39:10.979503 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:10 crc kubenswrapper[5034]: I0105 23:39:10.992971 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-694df96497-fdnmh"]
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.038112 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-openstack-cell1\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.038179 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-ovsdbserver-sb\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.038272 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-config\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.038342 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-ovsdbserver-nb\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.038367 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-dns-svc\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.038399 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg26j\" (UniqueName: \"kubernetes.io/projected/62100b1d-d7f0-4e4c-ad84-f756873c21ca-kube-api-access-qg26j\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.140326 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-ovsdbserver-nb\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.140388 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-dns-svc\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.140430 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg26j\" (UniqueName: \"kubernetes.io/projected/62100b1d-d7f0-4e4c-ad84-f756873c21ca-kube-api-access-qg26j\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.140514 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-openstack-cell1\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.140552 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-ovsdbserver-sb\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.140635 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-config\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.141504 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-config\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.141609 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-ovsdbserver-nb\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.141671 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-dns-svc\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.142305 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-openstack-cell1\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.142340 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62100b1d-d7f0-4e4c-ad84-f756873c21ca-ovsdbserver-sb\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.167277 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg26j\" (UniqueName: \"kubernetes.io/projected/62100b1d-d7f0-4e4c-ad84-f756873c21ca-kube-api-access-qg26j\") pod \"dnsmasq-dns-694df96497-fdnmh\" (UID: \"62100b1d-d7f0-4e4c-ad84-f756873c21ca\") " pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.321509 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694df96497-fdnmh"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.482258 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8689fb8b95-928kk"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.561305 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-dns-svc\") pod \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") "
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.561527 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-nb\") pod \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") "
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.561678 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/4a51c571-ded5-40bc-a904-e3ed1dc7affb-kube-api-access-4sx6q\") pod \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") "
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.561724 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-config\") pod \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") "
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.561758 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-sb\") pod \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\" (UID: \"4a51c571-ded5-40bc-a904-e3ed1dc7affb\") "
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.595834 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a51c571-ded5-40bc-a904-e3ed1dc7affb-kube-api-access-4sx6q" (OuterVolumeSpecName: "kube-api-access-4sx6q") pod "4a51c571-ded5-40bc-a904-e3ed1dc7affb" (UID: "4a51c571-ded5-40bc-a904-e3ed1dc7affb"). InnerVolumeSpecName "kube-api-access-4sx6q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.666756 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sx6q\" (UniqueName: \"kubernetes.io/projected/4a51c571-ded5-40bc-a904-e3ed1dc7affb-kube-api-access-4sx6q\") on node \"crc\" DevicePath \"\""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.668129 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a51c571-ded5-40bc-a904-e3ed1dc7affb" (UID: "4a51c571-ded5-40bc-a904-e3ed1dc7affb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.670810 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-config" (OuterVolumeSpecName: "config") pod "4a51c571-ded5-40bc-a904-e3ed1dc7affb" (UID: "4a51c571-ded5-40bc-a904-e3ed1dc7affb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.683203 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a51c571-ded5-40bc-a904-e3ed1dc7affb" (UID: "4a51c571-ded5-40bc-a904-e3ed1dc7affb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.706802 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a51c571-ded5-40bc-a904-e3ed1dc7affb" (UID: "4a51c571-ded5-40bc-a904-e3ed1dc7affb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.769561 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.771238 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.771281 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-config\") on node \"crc\" DevicePath \"\""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.771297 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a51c571-ded5-40bc-a904-e3ed1dc7affb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.833148 5034 generic.go:334] "Generic (PLEG): container finished" podID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" containerID="a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7" exitCode=0
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.833195 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" event={"ID":"4a51c571-ded5-40bc-a904-e3ed1dc7affb","Type":"ContainerDied","Data":"a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7"}
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.833225 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8689fb8b95-928kk" event={"ID":"4a51c571-ded5-40bc-a904-e3ed1dc7affb","Type":"ContainerDied","Data":"53de7554bb4ad519bd4c0f43090c594711c28a2f17e80a1bf91ce05ffb76d708"}
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.833243 5034 scope.go:117] "RemoveContainer" containerID="a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.833382 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8689fb8b95-928kk"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.880222 5034 scope.go:117] "RemoveContainer" containerID="ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.885070 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8689fb8b95-928kk"]
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.895778 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8689fb8b95-928kk"]
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.903650 5034 scope.go:117] "RemoveContainer" containerID="a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7"
Jan 05 23:39:11 crc kubenswrapper[5034]: E0105 23:39:11.904734 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7\": container with ID starting with a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7 not found: ID does not exist" containerID="a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.904804 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7"} err="failed to get container status \"a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7\": rpc error: code = NotFound desc = could not find container \"a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7\": container with ID starting with a4880bb12b89fe82cf2d7a884ca92ef7272247e55b8dd758023f754f62654df7 not found: ID does not exist"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.904850 5034 scope.go:117] "RemoveContainer" containerID="ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a"
Jan 05 23:39:11 crc kubenswrapper[5034]: E0105 23:39:11.905344 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a\": container with ID starting with ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a not found: ID does not exist" containerID="ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.905427 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a"} err="failed to get container status \"ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a\": rpc error: code = NotFound desc = could not find container \"ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a\": container with ID starting with ac5b62e8de3878f3afd8fb1771d76469155b0c17c940fb95b3fb84104bd5f85a not found: ID does not exist"
Jan 05 23:39:11 crc kubenswrapper[5034]: I0105 23:39:11.940843 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-694df96497-fdnmh"]
Jan 05 23:39:11 crc kubenswrapper[5034]: W0105 23:39:11.942461 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62100b1d_d7f0_4e4c_ad84_f756873c21ca.slice/crio-738b5b2e3e096a5408379f8f36f4a65709821629cc5cc344a89720f1656387d6 WatchSource:0}: Error
finding container 738b5b2e3e096a5408379f8f36f4a65709821629cc5cc344a89720f1656387d6: Status 404 returned error can't find the container with id 738b5b2e3e096a5408379f8f36f4a65709821629cc5cc344a89720f1656387d6 Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.139197 5034 scope.go:117] "RemoveContainer" containerID="8bf44c442112e405e57e853421b439210a2cb75050cc6fbfd420110c5736446e" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.200665 5034 scope.go:117] "RemoveContainer" containerID="f01fc3757f912a171babb5367dc03dfbdb4ef64762c8126064c3783f35183408" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.247021 5034 scope.go:117] "RemoveContainer" containerID="d982c4fc689dd58835e7c49bd3f7c97f28d00e62902f3166ebef16e2c6064fbe" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.300296 5034 scope.go:117] "RemoveContainer" containerID="86b4a482be6ced31dbb0f1e645bdccf50fe0a4708e4a12404b9ed238a9e90f0f" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.335219 5034 scope.go:117] "RemoveContainer" containerID="dccb0bee25d2718cedd5ce370849106c0293a3e55c45f5a1333324b543e8e571" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.363098 5034 scope.go:117] "RemoveContainer" containerID="de5ae22f45551f2ab1cc85f4ccaac35775024dbf85d9c3e663b530330d4a21fc" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.461713 5034 scope.go:117] "RemoveContainer" containerID="dbd11c0dd9721aa53c2c4d6b066a49c79479cd418e5fb16e44376b6e5c2ba099" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.509956 5034 scope.go:117] "RemoveContainer" containerID="193838d452ccbf14ea1c3b42b9c9d02317a91e80f551ebaa5397b9de1fe78ab4" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.552775 5034 scope.go:117] "RemoveContainer" containerID="b7f7dcd84c32b6c4ec88f53ca3f696f3c8d5058296f420178458415be61281a4" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.584099 5034 scope.go:117] "RemoveContainer" containerID="ac4d50feb7e948f9e4538756e93eb7eeec4d816a870a410eacc61514b7dfdaa0" Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.873298 5034 generic.go:334] "Generic (PLEG): container finished" podID="62100b1d-d7f0-4e4c-ad84-f756873c21ca" containerID="c9944c99de32ed340d3692d8e15034445a8baa6752701729cc11991061ab0e04" exitCode=0 Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.873361 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694df96497-fdnmh" event={"ID":"62100b1d-d7f0-4e4c-ad84-f756873c21ca","Type":"ContainerDied","Data":"c9944c99de32ed340d3692d8e15034445a8baa6752701729cc11991061ab0e04"} Jan 05 23:39:12 crc kubenswrapper[5034]: I0105 23:39:12.873389 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694df96497-fdnmh" event={"ID":"62100b1d-d7f0-4e4c-ad84-f756873c21ca","Type":"ContainerStarted","Data":"738b5b2e3e096a5408379f8f36f4a65709821629cc5cc344a89720f1656387d6"} Jan 05 23:39:13 crc kubenswrapper[5034]: I0105 23:39:13.860278 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" path="/var/lib/kubelet/pods/4a51c571-ded5-40bc-a904-e3ed1dc7affb/volumes" Jan 05 23:39:13 crc kubenswrapper[5034]: I0105 23:39:13.919574 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694df96497-fdnmh" event={"ID":"62100b1d-d7f0-4e4c-ad84-f756873c21ca","Type":"ContainerStarted","Data":"ed6cc11e48b95affa6133622f787d4b9582afeb9950bd7eadfc29725c4515235"} Jan 05 23:39:13 crc kubenswrapper[5034]: I0105 23:39:13.920299 5034 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-694df96497-fdnmh" Jan 05 23:39:13 crc kubenswrapper[5034]: I0105 23:39:13.945715 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-694df96497-fdnmh" podStartSLOduration=3.945689731 podStartE2EDuration="3.945689731s" podCreationTimestamp="2026-01-05 23:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 23:39:13.938576058 +0000 UTC m=+6446.310575547" watchObservedRunningTime="2026-01-05 23:39:13.945689731 +0000 UTC m=+6446.317689170" Jan 05 23:39:16 crc kubenswrapper[5034]: I0105 23:39:16.839515 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:39:16 crc kubenswrapper[5034]: E0105 23:39:16.840257 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:39:21 crc kubenswrapper[5034]: I0105 23:39:21.324015 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-694df96497-fdnmh" Jan 05 23:39:21 crc kubenswrapper[5034]: I0105 23:39:21.397984 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68c4dd6795-kwkqm"] Jan 05 23:39:21 crc kubenswrapper[5034]: I0105 23:39:21.398630 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" podUID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" containerName="dnsmasq-dns" containerID="cri-o://c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4" gracePeriod=10 Jan 05 23:39:21 crc kubenswrapper[5034]: I0105 23:39:21.918334 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.016557 5034 generic.go:334] "Generic (PLEG): container finished" podID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" containerID="c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4" exitCode=0 Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.016640 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" event={"ID":"2c2171b2-fe72-412a-b1e8-0d31b7861f20","Type":"ContainerDied","Data":"c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4"} Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.017065 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" event={"ID":"2c2171b2-fe72-412a-b1e8-0d31b7861f20","Type":"ContainerDied","Data":"fa774715dd75eb4eb9d32705c74481257a20e37a05718e7e9f80ab8c52966661"} Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.017122 5034 scope.go:117] "RemoveContainer" containerID="c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.016654 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68c4dd6795-kwkqm" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.047888 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-sb\") pod \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.048201 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxnkb\" (UniqueName: \"kubernetes.io/projected/2c2171b2-fe72-412a-b1e8-0d31b7861f20-kube-api-access-xxnkb\") pod \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.048306 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-nb\") pod \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.048330 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-openstack-cell1\") pod \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.048453 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-dns-svc\") pod \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.048582 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-config\") pod \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\" (UID: \"2c2171b2-fe72-412a-b1e8-0d31b7861f20\") " Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.055124 5034 scope.go:117] "RemoveContainer" containerID="91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.057736 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2171b2-fe72-412a-b1e8-0d31b7861f20-kube-api-access-xxnkb" (OuterVolumeSpecName: "kube-api-access-xxnkb") pod "2c2171b2-fe72-412a-b1e8-0d31b7861f20" (UID: "2c2171b2-fe72-412a-b1e8-0d31b7861f20"). InnerVolumeSpecName "kube-api-access-xxnkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.113275 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-config" (OuterVolumeSpecName: "config") pod "2c2171b2-fe72-412a-b1e8-0d31b7861f20" (UID: "2c2171b2-fe72-412a-b1e8-0d31b7861f20"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.118024 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "2c2171b2-fe72-412a-b1e8-0d31b7861f20" (UID: "2c2171b2-fe72-412a-b1e8-0d31b7861f20"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.121327 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c2171b2-fe72-412a-b1e8-0d31b7861f20" (UID: "2c2171b2-fe72-412a-b1e8-0d31b7861f20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.128454 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c2171b2-fe72-412a-b1e8-0d31b7861f20" (UID: "2c2171b2-fe72-412a-b1e8-0d31b7861f20"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.129223 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c2171b2-fe72-412a-b1e8-0d31b7861f20" (UID: "2c2171b2-fe72-412a-b1e8-0d31b7861f20"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.152463 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.152503 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxnkb\" (UniqueName: \"kubernetes.io/projected/2c2171b2-fe72-412a-b1e8-0d31b7861f20-kube-api-access-xxnkb\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.152517 5034 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.152526 5034 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.152537 5034 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.152546 5034 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2171b2-fe72-412a-b1e8-0d31b7861f20-config\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.189939 5034 scope.go:117] "RemoveContainer" 
containerID="c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4" Jan 05 23:39:22 crc kubenswrapper[5034]: E0105 23:39:22.190562 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4\": container with ID starting with c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4 not found: ID does not exist" containerID="c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.190609 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4"} err="failed to get container status \"c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4\": rpc error: code = NotFound desc = could not find container \"c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4\": container with ID starting with c90c5729388f3540d87ce244919d4b03500cf5f06ecf7551bacbd487aac5aeb4 not found: ID does not exist" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.190634 5034 scope.go:117] "RemoveContainer" containerID="91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9" Jan 05 23:39:22 crc kubenswrapper[5034]: E0105 23:39:22.190960 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9\": container with ID starting with 91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9 not found: ID does not exist" containerID="91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.190990 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9"} err="failed to get container status \"91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9\": rpc error: code = NotFound desc = could not find container \"91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9\": container with ID starting with 91ddfbddebef3fb4e4e4cff70f4eb54e98e43bfcda978e02b717f13358060db9 not found: ID does not exist" Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.370670 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68c4dd6795-kwkqm"] Jan 05 23:39:22 crc kubenswrapper[5034]: I0105 23:39:22.395127 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68c4dd6795-kwkqm"] Jan 05 23:39:23 crc kubenswrapper[5034]: I0105 23:39:23.851425 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" path="/var/lib/kubelet/pods/2c2171b2-fe72-412a-b1e8-0d31b7861f20/volumes" Jan 05 23:39:31 crc kubenswrapper[5034]: I0105 23:39:31.839265 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:39:31 crc kubenswrapper[5034]: E0105 23:39:31.840311 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.418353 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj"] Jan 05 23:39:32 crc kubenswrapper[5034]: E0105 23:39:32.419251 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" containerName="dnsmasq-dns" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.419373 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" containerName="dnsmasq-dns" Jan 05 23:39:32 crc kubenswrapper[5034]: E0105 23:39:32.419496 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" containerName="init" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.419580 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" containerName="init" Jan 05 23:39:32 crc kubenswrapper[5034]: E0105 23:39:32.419688 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" containerName="init" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.419770 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" containerName="init" Jan 05 23:39:32 crc kubenswrapper[5034]: E0105 23:39:32.419841 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" containerName="dnsmasq-dns" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.419921 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" containerName="dnsmasq-dns" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.420259 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2171b2-fe72-412a-b1e8-0d31b7861f20" containerName="dnsmasq-dns" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.420395 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a51c571-ded5-40bc-a904-e3ed1dc7affb" containerName="dnsmasq-dns" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.421874 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.424900 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.425345 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-zm9h2" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.425345 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.425348 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.428420 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj"] Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.529572 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzk4g\" (UniqueName: \"kubernetes.io/projected/c6c94074-1549-40f8-925b-64cbbc9a5a34-kube-api-access-lzk4g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.529684 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.529817 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.529878 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.632378 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzk4g\" (UniqueName: \"kubernetes.io/projected/c6c94074-1549-40f8-925b-64cbbc9a5a34-kube-api-access-lzk4g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.632431 5034 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.632515 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.632564 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.638803 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.638880 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.639304 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.654570 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzk4g\" (UniqueName: \"kubernetes.io/projected/c6c94074-1549-40f8-925b-64cbbc9a5a34-kube-api-access-lzk4g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:32 crc kubenswrapper[5034]: I0105 23:39:32.748560 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:33 crc kubenswrapper[5034]: I0105 23:39:33.375585 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj"] Jan 05 23:39:33 crc kubenswrapper[5034]: W0105 23:39:33.380702 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c94074_1549_40f8_925b_64cbbc9a5a34.slice/crio-ae5f42e601203e11f479c431f4883a339bd4c2cada87868331270c6bba073bce WatchSource:0}: Error finding container ae5f42e601203e11f479c431f4883a339bd4c2cada87868331270c6bba073bce: Status 404 returned error can't find the container with id ae5f42e601203e11f479c431f4883a339bd4c2cada87868331270c6bba073bce Jan 05 23:39:34 crc kubenswrapper[5034]: I0105 23:39:34.154974 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" event={"ID":"c6c94074-1549-40f8-925b-64cbbc9a5a34","Type":"ContainerStarted","Data":"ae5f42e601203e11f479c431f4883a339bd4c2cada87868331270c6bba073bce"} Jan 05 23:39:43 crc kubenswrapper[5034]: I0105 23:39:43.254840 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" event={"ID":"c6c94074-1549-40f8-925b-64cbbc9a5a34","Type":"ContainerStarted","Data":"6cb4771fed76535ba627ba8763c27491ae35c471a0f53585bc0ba764bd3fda87"} Jan 05 23:39:43 crc kubenswrapper[5034]: I0105 23:39:43.282268 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" podStartSLOduration=2.195627841 podStartE2EDuration="11.282241197s" podCreationTimestamp="2026-01-05 23:39:32 +0000 UTC" firstStartedPulling="2026-01-05 23:39:33.383670771 +0000 UTC m=+6465.755670210" lastFinishedPulling="2026-01-05 23:39:42.470284127 +0000 UTC m=+6474.842283566" observedRunningTime="2026-01-05 23:39:43.270526953 +0000 UTC m=+6475.642526392" watchObservedRunningTime="2026-01-05 23:39:43.282241197 +0000 UTC m=+6475.654240626" Jan 05 23:39:45 crc kubenswrapper[5034]: I0105 23:39:45.839563 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:39:45 crc kubenswrapper[5034]: E0105 23:39:45.840498 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:39:50 crc kubenswrapper[5034]: I0105 23:39:50.322632 5034 generic.go:334] "Generic (PLEG): container finished" podID="c6c94074-1549-40f8-925b-64cbbc9a5a34" containerID="6cb4771fed76535ba627ba8763c27491ae35c471a0f53585bc0ba764bd3fda87" exitCode=2 Jan 05 23:39:50 crc kubenswrapper[5034]: I0105 23:39:50.322725 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" event={"ID":"c6c94074-1549-40f8-925b-64cbbc9a5a34","Type":"ContainerDied","Data":"6cb4771fed76535ba627ba8763c27491ae35c471a0f53585bc0ba764bd3fda87"} Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.768091 5034 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.855564 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-inventory\") pod \"c6c94074-1549-40f8-925b-64cbbc9a5a34\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.856057 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-ssh-key\") pod \"c6c94074-1549-40f8-925b-64cbbc9a5a34\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.856189 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-pre-adoption-validation-combined-ca-bundle\") pod \"c6c94074-1549-40f8-925b-64cbbc9a5a34\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.856355 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzk4g\" (UniqueName: \"kubernetes.io/projected/c6c94074-1549-40f8-925b-64cbbc9a5a34-kube-api-access-lzk4g\") pod \"c6c94074-1549-40f8-925b-64cbbc9a5a34\" (UID: \"c6c94074-1549-40f8-925b-64cbbc9a5a34\") " Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.861987 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c94074-1549-40f8-925b-64cbbc9a5a34-kube-api-access-lzk4g" (OuterVolumeSpecName: "kube-api-access-lzk4g") pod "c6c94074-1549-40f8-925b-64cbbc9a5a34" (UID: "c6c94074-1549-40f8-925b-64cbbc9a5a34"). InnerVolumeSpecName "kube-api-access-lzk4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.865436 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "c6c94074-1549-40f8-925b-64cbbc9a5a34" (UID: "c6c94074-1549-40f8-925b-64cbbc9a5a34"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.889431 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c6c94074-1549-40f8-925b-64cbbc9a5a34" (UID: "c6c94074-1549-40f8-925b-64cbbc9a5a34"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.889469 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-inventory" (OuterVolumeSpecName: "inventory") pod "c6c94074-1549-40f8-925b-64cbbc9a5a34" (UID: "c6c94074-1549-40f8-925b-64cbbc9a5a34"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.959452 5034 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.959488 5034 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.959501 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzk4g\" (UniqueName: \"kubernetes.io/projected/c6c94074-1549-40f8-925b-64cbbc9a5a34-kube-api-access-lzk4g\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:51 crc kubenswrapper[5034]: I0105 23:39:51.959512 5034 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c94074-1549-40f8-925b-64cbbc9a5a34-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 23:39:52 crc kubenswrapper[5034]: I0105 23:39:52.342053 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" event={"ID":"c6c94074-1549-40f8-925b-64cbbc9a5a34","Type":"ContainerDied","Data":"ae5f42e601203e11f479c431f4883a339bd4c2cada87868331270c6bba073bce"} Jan 05 23:39:52 crc kubenswrapper[5034]: I0105 23:39:52.342122 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj" Jan 05 23:39:52 crc kubenswrapper[5034]: I0105 23:39:52.342126 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5f42e601203e11f479c431f4883a339bd4c2cada87868331270c6bba073bce" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.036231 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r"] Jan 05 23:40:00 crc kubenswrapper[5034]: E0105 23:40:00.037525 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c94074-1549-40f8-925b-64cbbc9a5a34" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.037560 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c94074-1549-40f8-925b-64cbbc9a5a34" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.037911 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c94074-1549-40f8-925b-64cbbc9a5a34" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.038907 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.042964 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.043103 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.043196 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.043240 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-zm9h2" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.048558 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r"] Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.144476 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.144977 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.145167 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.145263 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtb7g\" (UniqueName: \"kubernetes.io/projected/0bfad148-5a3b-4f64-bfbc-eb42df610c76-kube-api-access-gtb7g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.247516 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.247620 5034 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gtb7g\" (UniqueName: \"kubernetes.io/projected/0bfad148-5a3b-4f64-bfbc-eb42df610c76-kube-api-access-gtb7g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.247766 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.247914 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.253924 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.254640 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.255169 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.264762 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtb7g\" (UniqueName: \"kubernetes.io/projected/0bfad148-5a3b-4f64-bfbc-eb42df610c76-kube-api-access-gtb7g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.382745 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.839819 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:40:00 crc kubenswrapper[5034]: E0105 23:40:00.840517 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:40:00 crc kubenswrapper[5034]: I0105 23:40:00.992128 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r"] Jan 05 23:40:01 crc kubenswrapper[5034]: I0105 23:40:01.477659 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" event={"ID":"0bfad148-5a3b-4f64-bfbc-eb42df610c76","Type":"ContainerStarted","Data":"0fd03a89e907df729acd52b12b4273938480b011e6eec27d8c147a2139d0d029"} Jan 05 23:40:02 crc kubenswrapper[5034]: I0105 23:40:02.489904 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" event={"ID":"0bfad148-5a3b-4f64-bfbc-eb42df610c76","Type":"ContainerStarted","Data":"28263232b2aa3f4fcf268d8e460c17c5c8f74c4f2ec08d743684411a400d51f4"} Jan 05 23:40:02 crc kubenswrapper[5034]: I0105 23:40:02.510888 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" podStartSLOduration=1.8800352409999999 podStartE2EDuration="2.510861677s" podCreationTimestamp="2026-01-05 23:40:00 +0000 UTC" firstStartedPulling="2026-01-05 23:40:00.999338673 +0000 UTC m=+6493.371338112" lastFinishedPulling="2026-01-05 23:40:01.630165109 +0000 UTC m=+6494.002164548" observedRunningTime="2026-01-05 23:40:02.504205187 +0000 UTC m=+6494.876204636" watchObservedRunningTime="2026-01-05 23:40:02.510861677 +0000 UTC m=+6494.882861116" Jan 05 23:40:09 crc kubenswrapper[5034]: I0105 23:40:09.591163 5034 generic.go:334] "Generic (PLEG): container finished" podID="0bfad148-5a3b-4f64-bfbc-eb42df610c76" containerID="28263232b2aa3f4fcf268d8e460c17c5c8f74c4f2ec08d743684411a400d51f4" exitCode=2 Jan 05 23:40:09 crc kubenswrapper[5034]: I0105 23:40:09.591254 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" event={"ID":"0bfad148-5a3b-4f64-bfbc-eb42df610c76","Type":"ContainerDied","Data":"28263232b2aa3f4fcf268d8e460c17c5c8f74c4f2ec08d743684411a400d51f4"} Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.083382 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.238952 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtb7g\" (UniqueName: \"kubernetes.io/projected/0bfad148-5a3b-4f64-bfbc-eb42df610c76-kube-api-access-gtb7g\") pod \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.239038 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-inventory\") pod \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.239127 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-ssh-key\") pod \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.239161 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-pre-adoption-validation-combined-ca-bundle\") pod \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\" (UID: \"0bfad148-5a3b-4f64-bfbc-eb42df610c76\") " Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.244816 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bfad148-5a3b-4f64-bfbc-eb42df610c76-kube-api-access-gtb7g" (OuterVolumeSpecName: "kube-api-access-gtb7g") pod "0bfad148-5a3b-4f64-bfbc-eb42df610c76" (UID: "0bfad148-5a3b-4f64-bfbc-eb42df610c76"). InnerVolumeSpecName "kube-api-access-gtb7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.246709 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "0bfad148-5a3b-4f64-bfbc-eb42df610c76" (UID: "0bfad148-5a3b-4f64-bfbc-eb42df610c76"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.284152 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0bfad148-5a3b-4f64-bfbc-eb42df610c76" (UID: "0bfad148-5a3b-4f64-bfbc-eb42df610c76"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.284923 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-inventory" (OuterVolumeSpecName: "inventory") pod "0bfad148-5a3b-4f64-bfbc-eb42df610c76" (UID: "0bfad148-5a3b-4f64-bfbc-eb42df610c76"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.347176 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtb7g\" (UniqueName: \"kubernetes.io/projected/0bfad148-5a3b-4f64-bfbc-eb42df610c76-kube-api-access-gtb7g\") on node \"crc\" DevicePath \"\"" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.347216 5034 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.347227 5034 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.347238 5034 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfad148-5a3b-4f64-bfbc-eb42df610c76-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.609092 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" event={"ID":"0bfad148-5a3b-4f64-bfbc-eb42df610c76","Type":"ContainerDied","Data":"0fd03a89e907df729acd52b12b4273938480b011e6eec27d8c147a2139d0d029"} Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.609392 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd03a89e907df729acd52b12b4273938480b011e6eec27d8c147a2139d0d029" Jan 05 23:40:11 crc kubenswrapper[5034]: I0105 23:40:11.609133 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r" Jan 05 23:40:13 crc kubenswrapper[5034]: I0105 23:40:13.061218 5034 scope.go:117] "RemoveContainer" containerID="f0524a9770bf3e5791e8fe4c07737aeea0773d857d16fda09849f22e19c9bc7e" Jan 05 23:40:13 crc kubenswrapper[5034]: I0105 23:40:13.083383 5034 scope.go:117] "RemoveContainer" containerID="3c9a31fdd84df109c291954a36aec019e049367ccb73bb8f644f1ed677b5f7b8" Jan 05 23:40:13 crc kubenswrapper[5034]: I0105 23:40:13.261783 5034 scope.go:117] "RemoveContainer" containerID="84b88905f30f9f29c945ad6b42f6bcc84a04487da3a146821bebde81496910dd" Jan 05 23:40:13 crc kubenswrapper[5034]: I0105 23:40:13.445675 5034 scope.go:117] "RemoveContainer" containerID="30aff88540cdc3a8d2c5815292118b034cfac20451cdd29999b44af862a3c3e7" Jan 05 23:40:13 crc kubenswrapper[5034]: I0105 23:40:13.472236 5034 scope.go:117] "RemoveContainer" containerID="638c5ff766acd63968bfe7c51957902177e214312f745bb76a28552afff017af" Jan 05 23:40:13 crc kubenswrapper[5034]: I0105 23:40:13.507618 5034 scope.go:117] "RemoveContainer" containerID="afb1ccf19a436dff6531f5c50da289ec669db82f0b2dc659dcfdcc987654717d" Jan 05 23:40:13 crc kubenswrapper[5034]: I0105 23:40:13.839386 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:40:13 crc kubenswrapper[5034]: E0105 23:40:13.839727 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:40:26 crc kubenswrapper[5034]: I0105 23:40:26.839552 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:40:26 crc kubenswrapper[5034]: E0105 23:40:26.840596 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:40:40 crc kubenswrapper[5034]: I0105 23:40:40.839804 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:40:40 crc kubenswrapper[5034]: E0105 23:40:40.840905 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:40:44 crc kubenswrapper[5034]: I0105 23:40:44.045339 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-nnn7q"] Jan 05 23:40:44 crc kubenswrapper[5034]: I0105 23:40:44.059639 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-nnn7q"] Jan 05 23:40:45 crc kubenswrapper[5034]: 
I0105 23:40:45.850585 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1caa25-b7ed-49cc-9adf-41a81873b4ea" path="/var/lib/kubelet/pods/cc1caa25-b7ed-49cc-9adf-41a81873b4ea/volumes" Jan 05 23:40:46 crc kubenswrapper[5034]: I0105 23:40:46.044390 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-ec43-account-create-update-j7d9r"] Jan 05 23:40:46 crc kubenswrapper[5034]: I0105 23:40:46.058603 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-ec43-account-create-update-j7d9r"] Jan 05 23:40:47 crc kubenswrapper[5034]: I0105 23:40:47.870172 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577a48c3-b291-4924-a70b-ce1ad07ea2b7" path="/var/lib/kubelet/pods/577a48c3-b291-4924-a70b-ce1ad07ea2b7/volumes" Jan 05 23:40:51 crc kubenswrapper[5034]: I0105 23:40:51.037832 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-sczmb"] Jan 05 23:40:51 crc kubenswrapper[5034]: I0105 23:40:51.049793 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-sczmb"] Jan 05 23:40:51 crc kubenswrapper[5034]: I0105 23:40:51.851907 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365e662d-e16c-4f51-af06-055f14000dc6" path="/var/lib/kubelet/pods/365e662d-e16c-4f51-af06-055f14000dc6/volumes" Jan 05 23:40:52 crc kubenswrapper[5034]: I0105 23:40:52.055664 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-ab47-account-create-update-mcb49"] Jan 05 23:40:52 crc kubenswrapper[5034]: I0105 23:40:52.070542 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-ab47-account-create-update-mcb49"] Jan 05 23:40:52 crc kubenswrapper[5034]: I0105 23:40:52.842846 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:40:52 crc kubenswrapper[5034]: E0105 23:40:52.844207 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:40:53 crc kubenswrapper[5034]: I0105 23:40:53.854260 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383eea24-e3ab-47ab-9a30-196673dd0ccd" path="/var/lib/kubelet/pods/383eea24-e3ab-47ab-9a30-196673dd0ccd/volumes" Jan 05 23:41:07 crc kubenswrapper[5034]: I0105 23:41:07.844789 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:41:07 crc kubenswrapper[5034]: E0105 23:41:07.845672 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.573570 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5pvlv/must-gather-tqvqv"] Jan 05 23:41:11 crc 
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.574733 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfad148-5a3b-4f64-bfbc-eb42df610c76" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.575046 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfad148-5a3b-4f64-bfbc-eb42df610c76" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.577411 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5pvlv/must-gather-tqvqv"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.582490 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5pvlv"/"kube-root-ca.crt"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.582665 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5pvlv"/"openshift-service-ca.crt"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.586767 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5pvlv/must-gather-tqvqv"]
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.587870 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5pvlv"/"default-dockercfg-4ql2j"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.612859 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wr9r\" (UniqueName: \"kubernetes.io/projected/51f20dee-93fc-4732-a939-b64019e28734-kube-api-access-7wr9r\") pod \"must-gather-tqvqv\" (UID: \"51f20dee-93fc-4732-a939-b64019e28734\") " pod="openshift-must-gather-5pvlv/must-gather-tqvqv"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.613019 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51f20dee-93fc-4732-a939-b64019e28734-must-gather-output\") pod \"must-gather-tqvqv\" (UID: \"51f20dee-93fc-4732-a939-b64019e28734\") " pod="openshift-must-gather-5pvlv/must-gather-tqvqv"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.715327 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wr9r\" (UniqueName: \"kubernetes.io/projected/51f20dee-93fc-4732-a939-b64019e28734-kube-api-access-7wr9r\") pod \"must-gather-tqvqv\" (UID: \"51f20dee-93fc-4732-a939-b64019e28734\") " pod="openshift-must-gather-5pvlv/must-gather-tqvqv"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.715523 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51f20dee-93fc-4732-a939-b64019e28734-must-gather-output\") pod \"must-gather-tqvqv\" (UID: \"51f20dee-93fc-4732-a939-b64019e28734\") " pod="openshift-must-gather-5pvlv/must-gather-tqvqv"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.716141 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51f20dee-93fc-4732-a939-b64019e28734-must-gather-output\") pod \"must-gather-tqvqv\" (UID: \"51f20dee-93fc-4732-a939-b64019e28734\") " pod="openshift-must-gather-5pvlv/must-gather-tqvqv"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.736205 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wr9r\" (UniqueName: \"kubernetes.io/projected/51f20dee-93fc-4732-a939-b64019e28734-kube-api-access-7wr9r\") pod \"must-gather-tqvqv\" (UID: \"51f20dee-93fc-4732-a939-b64019e28734\") " pod="openshift-must-gather-5pvlv/must-gather-tqvqv"
Jan 05 23:41:11 crc kubenswrapper[5034]: I0105 23:41:11.904376 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5pvlv/must-gather-tqvqv"
Jan 05 23:41:12 crc kubenswrapper[5034]: I0105 23:41:12.384328 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5pvlv/must-gather-tqvqv"]
Jan 05 23:41:13 crc kubenswrapper[5034]: I0105 23:41:13.346738 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/must-gather-tqvqv" event={"ID":"51f20dee-93fc-4732-a939-b64019e28734","Type":"ContainerStarted","Data":"c817c8ce608e62d55686de337732224612f34fd9a4dcfdfcad5ecd4798ee8122"}
Jan 05 23:41:13 crc kubenswrapper[5034]: I0105 23:41:13.762210 5034 scope.go:117] "RemoveContainer" containerID="6954754df6166fc84af6bae312741ca24fc9f16d31798cee2e7ba046d7a5c7dc"
Jan 05 23:41:13 crc kubenswrapper[5034]: I0105 23:41:13.808419 5034 scope.go:117] "RemoveContainer" containerID="ce0fb3d9058817fa6f0b14f17fa480241bb4fbf31c1ecef233a010ee2feb2392"
Jan 05 23:41:13 crc kubenswrapper[5034]: I0105 23:41:13.852973 5034 scope.go:117] "RemoveContainer" containerID="1a12659db1b9adca314a1d686372690b17a2b3086f12aeb0ca31ddb9f60af19d"
Jan 05 23:41:13 crc kubenswrapper[5034]: I0105 23:41:13.895053 5034 scope.go:117] "RemoveContainer" containerID="b942d7d6bd64d44eca46ef1276f7c0a285907164d6f1bde6b20b99981aced54c"
Jan 05 23:41:21 crc kubenswrapper[5034]: I0105 23:41:21.454158 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/must-gather-tqvqv" event={"ID":"51f20dee-93fc-4732-a939-b64019e28734","Type":"ContainerStarted","Data":"39b861ec63e08de25cc2a99e9ccac04ac3ec6e4eabe1686009f225be408c52ff"}
Jan 05 23:41:21 crc kubenswrapper[5034]: I0105 23:41:21.454691 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/must-gather-tqvqv" event={"ID":"51f20dee-93fc-4732-a939-b64019e28734","Type":"ContainerStarted","Data":"35d5ec16103f17584659229f3a1346461bfc671a14431c343ae33a0e20767c80"}
Jan 05 23:41:21 crc kubenswrapper[5034]: I0105 23:41:21.473296 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5pvlv/must-gather-tqvqv" podStartSLOduration=2.602543331 podStartE2EDuration="10.473276491s" podCreationTimestamp="2026-01-05 23:41:11 +0000 UTC" firstStartedPulling="2026-01-05 23:41:12.396336412 +0000 UTC m=+6564.768335851" lastFinishedPulling="2026-01-05 23:41:20.267069572 +0000 UTC m=+6572.639069011" observedRunningTime="2026-01-05 23:41:21.470156272 +0000 UTC m=+6573.842155711" watchObservedRunningTime="2026-01-05 23:41:21.473276491 +0000 UTC m=+6573.845275920"
Jan 05 23:41:21 crc kubenswrapper[5034]: I0105 23:41:21.840167 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16"
Jan 05 23:41:21 crc kubenswrapper[5034]: E0105 23:41:21.840454 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:41:24 crc kubenswrapper[5034]: I0105 23:41:24.729268 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5pvlv/crc-debug-n2gnn"] Jan 05 23:41:24 crc kubenswrapper[5034]: I0105 23:41:24.731670 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:24 crc kubenswrapper[5034]: I0105 23:41:24.861780 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xthr2\" (UniqueName: \"kubernetes.io/projected/f6d38e14-457a-43f0-ad43-1b1ac834c448-kube-api-access-xthr2\") pod \"crc-debug-n2gnn\" (UID: \"f6d38e14-457a-43f0-ad43-1b1ac834c448\") " pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:24 crc kubenswrapper[5034]: I0105 23:41:24.862198 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d38e14-457a-43f0-ad43-1b1ac834c448-host\") pod \"crc-debug-n2gnn\" (UID: \"f6d38e14-457a-43f0-ad43-1b1ac834c448\") " pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:24 crc kubenswrapper[5034]: I0105 23:41:24.964052 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xthr2\" (UniqueName: \"kubernetes.io/projected/f6d38e14-457a-43f0-ad43-1b1ac834c448-kube-api-access-xthr2\") pod \"crc-debug-n2gnn\" (UID: \"f6d38e14-457a-43f0-ad43-1b1ac834c448\") " pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:24 crc kubenswrapper[5034]: I0105 23:41:24.964217 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d38e14-457a-43f0-ad43-1b1ac834c448-host\") pod \"crc-debug-n2gnn\" (UID: \"f6d38e14-457a-43f0-ad43-1b1ac834c448\") " pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:24 crc kubenswrapper[5034]: I0105 23:41:24.964476 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d38e14-457a-43f0-ad43-1b1ac834c448-host\") pod \"crc-debug-n2gnn\" (UID: \"f6d38e14-457a-43f0-ad43-1b1ac834c448\") " pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:24 crc kubenswrapper[5034]: I0105 23:41:24.983785 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xthr2\" (UniqueName: \"kubernetes.io/projected/f6d38e14-457a-43f0-ad43-1b1ac834c448-kube-api-access-xthr2\") pod \"crc-debug-n2gnn\" (UID: \"f6d38e14-457a-43f0-ad43-1b1ac834c448\") " pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:25 crc kubenswrapper[5034]: I0105 23:41:25.050626 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:25 crc kubenswrapper[5034]: I0105 23:41:25.496619 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" event={"ID":"f6d38e14-457a-43f0-ad43-1b1ac834c448","Type":"ContainerStarted","Data":"77152fe3e0bc9d2a693dc30e5872c281b90d9019c9de6e97a1737d1ab9bb5ff4"} Jan 05 23:41:33 crc kubenswrapper[5034]: I0105 23:41:33.838633 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:41:33 crc kubenswrapper[5034]: E0105 23:41:33.839542 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:41:38 crc kubenswrapper[5034]: I0105 23:41:38.640587 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" event={"ID":"f6d38e14-457a-43f0-ad43-1b1ac834c448","Type":"ContainerStarted","Data":"e9a9e8e585ac8f4c8dbd8822d7ffdc33e234297ba9f5341e2209d955ce5f8e1e"} Jan 05 23:41:38 crc kubenswrapper[5034]: I0105 23:41:38.659185 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" podStartSLOduration=1.926981297 podStartE2EDuration="14.659156121s" podCreationTimestamp="2026-01-05 23:41:24 +0000 UTC" firstStartedPulling="2026-01-05 23:41:25.129297856 +0000 UTC m=+6577.501297295" lastFinishedPulling="2026-01-05 23:41:37.86147268 +0000 UTC m=+6590.233472119" observedRunningTime="2026-01-05 23:41:38.652908553 +0000 UTC m=+6591.024908002" watchObservedRunningTime="2026-01-05 23:41:38.659156121 +0000 UTC m=+6591.031155560" Jan 05 23:41:44 crc kubenswrapper[5034]: I0105 23:41:44.838900 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:41:44 crc kubenswrapper[5034]: E0105 23:41:44.839952 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:41:46 crc kubenswrapper[5034]: I0105 23:41:46.048466 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-xpclr"] Jan 05 23:41:46 crc kubenswrapper[5034]: I0105 23:41:46.058810 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-xpclr"] Jan 05 23:41:47 crc kubenswrapper[5034]: I0105 23:41:47.919879 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df5858b-127b-4f28-8ce3-b0dc938ddaae" path="/var/lib/kubelet/pods/5df5858b-127b-4f28-8ce3-b0dc938ddaae/volumes" Jan 05 23:41:52 crc kubenswrapper[5034]: I0105 23:41:52.768655 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" 
event={"ID":"f6d38e14-457a-43f0-ad43-1b1ac834c448","Type":"ContainerDied","Data":"e9a9e8e585ac8f4c8dbd8822d7ffdc33e234297ba9f5341e2209d955ce5f8e1e"} Jan 05 23:41:52 crc kubenswrapper[5034]: I0105 23:41:52.768597 5034 generic.go:334] "Generic (PLEG): container finished" podID="f6d38e14-457a-43f0-ad43-1b1ac834c448" containerID="e9a9e8e585ac8f4c8dbd8822d7ffdc33e234297ba9f5341e2209d955ce5f8e1e" exitCode=0 Jan 05 23:41:53 crc kubenswrapper[5034]: I0105 23:41:53.909220 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:53 crc kubenswrapper[5034]: I0105 23:41:53.948588 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5pvlv/crc-debug-n2gnn"] Jan 05 23:41:53 crc kubenswrapper[5034]: I0105 23:41:53.959894 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5pvlv/crc-debug-n2gnn"] Jan 05 23:41:53 crc kubenswrapper[5034]: I0105 23:41:53.997958 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xthr2\" (UniqueName: \"kubernetes.io/projected/f6d38e14-457a-43f0-ad43-1b1ac834c448-kube-api-access-xthr2\") pod \"f6d38e14-457a-43f0-ad43-1b1ac834c448\" (UID: \"f6d38e14-457a-43f0-ad43-1b1ac834c448\") " Jan 05 23:41:53 crc kubenswrapper[5034]: I0105 23:41:53.998096 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d38e14-457a-43f0-ad43-1b1ac834c448-host\") pod \"f6d38e14-457a-43f0-ad43-1b1ac834c448\" (UID: \"f6d38e14-457a-43f0-ad43-1b1ac834c448\") " Jan 05 23:41:53 crc kubenswrapper[5034]: I0105 23:41:53.998487 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6d38e14-457a-43f0-ad43-1b1ac834c448-host" (OuterVolumeSpecName: "host") pod "f6d38e14-457a-43f0-ad43-1b1ac834c448" (UID: "f6d38e14-457a-43f0-ad43-1b1ac834c448"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:41:53 crc kubenswrapper[5034]: I0105 23:41:53.999326 5034 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d38e14-457a-43f0-ad43-1b1ac834c448-host\") on node \"crc\" DevicePath \"\"" Jan 05 23:41:54 crc kubenswrapper[5034]: I0105 23:41:54.012547 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d38e14-457a-43f0-ad43-1b1ac834c448-kube-api-access-xthr2" (OuterVolumeSpecName: "kube-api-access-xthr2") pod "f6d38e14-457a-43f0-ad43-1b1ac834c448" (UID: "f6d38e14-457a-43f0-ad43-1b1ac834c448"). InnerVolumeSpecName "kube-api-access-xthr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:41:54 crc kubenswrapper[5034]: I0105 23:41:54.109285 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xthr2\" (UniqueName: \"kubernetes.io/projected/f6d38e14-457a-43f0-ad43-1b1ac834c448-kube-api-access-xthr2\") on node \"crc\" DevicePath \"\"" Jan 05 23:41:54 crc kubenswrapper[5034]: I0105 23:41:54.789967 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77152fe3e0bc9d2a693dc30e5872c281b90d9019c9de6e97a1737d1ab9bb5ff4" Jan 05 23:41:54 crc kubenswrapper[5034]: I0105 23:41:54.790130 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5pvlv/crc-debug-n2gnn" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.120379 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5pvlv/crc-debug-xf856"] Jan 05 23:41:55 crc kubenswrapper[5034]: E0105 23:41:55.125136 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d38e14-457a-43f0-ad43-1b1ac834c448" containerName="container-00" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.125263 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d38e14-457a-43f0-ad43-1b1ac834c448" containerName="container-00" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.125655 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d38e14-457a-43f0-ad43-1b1ac834c448" containerName="container-00" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.126825 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.236526 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zv9\" (UniqueName: \"kubernetes.io/projected/cc8e8b1a-edb4-4c23-b879-a35caee1570a-kube-api-access-k2zv9\") pod \"crc-debug-xf856\" (UID: \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\") " pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.237045 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc8e8b1a-edb4-4c23-b879-a35caee1570a-host\") pod \"crc-debug-xf856\" (UID: \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\") " pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.338863 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2zv9\" (UniqueName: \"kubernetes.io/projected/cc8e8b1a-edb4-4c23-b879-a35caee1570a-kube-api-access-k2zv9\") pod \"crc-debug-xf856\" (UID: \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\") " pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.339353 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc8e8b1a-edb4-4c23-b879-a35caee1570a-host\") pod \"crc-debug-xf856\" (UID: \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\") " pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.339490 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc8e8b1a-edb4-4c23-b879-a35caee1570a-host\") pod \"crc-debug-xf856\" (UID: \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\") " pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.357484 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2zv9\" (UniqueName: \"kubernetes.io/projected/cc8e8b1a-edb4-4c23-b879-a35caee1570a-kube-api-access-k2zv9\") pod \"crc-debug-xf856\" (UID: \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\") " pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.447725 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:55 crc kubenswrapper[5034]: W0105 23:41:55.491637 5034 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc8e8b1a_edb4_4c23_b879_a35caee1570a.slice/crio-d4d1df55cd1800ec75564affe6e0790fa61ac85571c5456085812736fd8fdeba WatchSource:0}: Error finding container d4d1df55cd1800ec75564affe6e0790fa61ac85571c5456085812736fd8fdeba: Status 404 returned error can't find the container with id d4d1df55cd1800ec75564affe6e0790fa61ac85571c5456085812736fd8fdeba Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.801855 5034 generic.go:334] "Generic (PLEG): container finished" podID="cc8e8b1a-edb4-4c23-b879-a35caee1570a" containerID="3bd66723a4b603a86e035531aede708747bab3bcac3902b1c5ba5d0280dd1005" exitCode=1 Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.801971 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/crc-debug-xf856" event={"ID":"cc8e8b1a-edb4-4c23-b879-a35caee1570a","Type":"ContainerDied","Data":"3bd66723a4b603a86e035531aede708747bab3bcac3902b1c5ba5d0280dd1005"} Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.802182 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/crc-debug-xf856" event={"ID":"cc8e8b1a-edb4-4c23-b879-a35caee1570a","Type":"ContainerStarted","Data":"d4d1df55cd1800ec75564affe6e0790fa61ac85571c5456085812736fd8fdeba"} Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.839357 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:41:55 crc kubenswrapper[5034]: E0105 23:41:55.839859 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.854302 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d38e14-457a-43f0-ad43-1b1ac834c448" path="/var/lib/kubelet/pods/f6d38e14-457a-43f0-ad43-1b1ac834c448/volumes" Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.855166 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5pvlv/crc-debug-xf856"] Jan 05 23:41:55 crc kubenswrapper[5034]: I0105 23:41:55.864327 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5pvlv/crc-debug-xf856"] Jan 05 23:41:56 crc kubenswrapper[5034]: I0105 23:41:56.940737 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:56 crc kubenswrapper[5034]: I0105 23:41:56.976143 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc8e8b1a-edb4-4c23-b879-a35caee1570a-host\") pod \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\" (UID: \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\") " Jan 05 23:41:56 crc kubenswrapper[5034]: I0105 23:41:56.976258 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc8e8b1a-edb4-4c23-b879-a35caee1570a-host" (OuterVolumeSpecName: "host") pod "cc8e8b1a-edb4-4c23-b879-a35caee1570a" (UID: "cc8e8b1a-edb4-4c23-b879-a35caee1570a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 23:41:56 crc kubenswrapper[5034]: I0105 23:41:56.976270 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2zv9\" (UniqueName: \"kubernetes.io/projected/cc8e8b1a-edb4-4c23-b879-a35caee1570a-kube-api-access-k2zv9\") pod \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\" (UID: \"cc8e8b1a-edb4-4c23-b879-a35caee1570a\") " Jan 05 23:41:56 crc kubenswrapper[5034]: I0105 23:41:56.976950 5034 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc8e8b1a-edb4-4c23-b879-a35caee1570a-host\") on node \"crc\" DevicePath \"\"" Jan 05 23:41:56 crc kubenswrapper[5034]: I0105 23:41:56.982486 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8e8b1a-edb4-4c23-b879-a35caee1570a-kube-api-access-k2zv9" (OuterVolumeSpecName: "kube-api-access-k2zv9") pod "cc8e8b1a-edb4-4c23-b879-a35caee1570a" (UID: "cc8e8b1a-edb4-4c23-b879-a35caee1570a"). InnerVolumeSpecName "kube-api-access-k2zv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:41:57 crc kubenswrapper[5034]: I0105 23:41:57.081705 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2zv9\" (UniqueName: \"kubernetes.io/projected/cc8e8b1a-edb4-4c23-b879-a35caee1570a-kube-api-access-k2zv9\") on node \"crc\" DevicePath \"\"" Jan 05 23:41:57 crc kubenswrapper[5034]: I0105 23:41:57.826825 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d1df55cd1800ec75564affe6e0790fa61ac85571c5456085812736fd8fdeba" Jan 05 23:41:57 crc kubenswrapper[5034]: I0105 23:41:57.826861 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5pvlv/crc-debug-xf856" Jan 05 23:41:57 crc kubenswrapper[5034]: I0105 23:41:57.854874 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc8e8b1a-edb4-4c23-b879-a35caee1570a" path="/var/lib/kubelet/pods/cc8e8b1a-edb4-4c23-b879-a35caee1570a/volumes" Jan 05 23:42:08 crc kubenswrapper[5034]: I0105 23:42:08.839043 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:42:08 crc kubenswrapper[5034]: E0105 23:42:08.839804 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:42:14 crc kubenswrapper[5034]: I0105 23:42:14.067604 5034 scope.go:117] "RemoveContainer" containerID="8232c0ad270594f826771803f4dcc4867043171c9d9bc5ede80c9b5c70174dce" Jan 05 23:42:14 crc kubenswrapper[5034]: I0105 23:42:14.114806 5034 scope.go:117] "RemoveContainer" containerID="0db83b64e78f4d4954ba4f6bf548977c93dec6604e6f6187c80f7c4c4bf0061a" Jan 05 23:42:20 crc kubenswrapper[5034]: I0105 23:42:20.839945 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:42:20 crc kubenswrapper[5034]: E0105 23:42:20.840960 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:42:35 crc kubenswrapper[5034]: I0105 23:42:35.838515 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:42:36 crc kubenswrapper[5034]: I0105 23:42:36.220614 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"caec8cb16ece25b9abbce837443558fede133cd330a8e77c74b35a5b29449ad5"} Jan 05 23:42:51 crc kubenswrapper[5034]: I0105 23:42:51.784538 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_32d9b801-27e9-4825-a130-6531d245e769/init-config-reloader/0.log" Jan 05 23:42:51 crc kubenswrapper[5034]: I0105 23:42:51.960517 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_32d9b801-27e9-4825-a130-6531d245e769/init-config-reloader/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.004956 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_32d9b801-27e9-4825-a130-6531d245e769/alertmanager/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.030010 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_32d9b801-27e9-4825-a130-6531d245e769/config-reloader/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.183987 5034 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_aodh-0_00196117-ebbd-4c00-92d7-c2660c0c5b78/aodh-api/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.265507 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_00196117-ebbd-4c00-92d7-c2660c0c5b78/aodh-evaluator/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.393352 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_00196117-ebbd-4c00-92d7-c2660c0c5b78/aodh-notifier/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.415484 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_00196117-ebbd-4c00-92d7-c2660c0c5b78/aodh-listener/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.481535 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-9a4c-account-create-update-89d7j_6b6254af-35d4-4259-869d-194ae72e9c8a/mariadb-account-create-update/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.605595 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-db-create-vmrvq_c1bcc6da-3adc-495c-b32c-c328ecd78165/mariadb-database-create/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.731406 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-db-sync-8d82c_90b334f9-01bc-4ec5-a98e-a65e946939c9/aodh-db-sync/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.884172 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76d8878c56-6h7md_b405fa3f-251f-4a35-a756-ec7791a18148/barbican-api/0.log" Jan 05 23:42:52 crc kubenswrapper[5034]: I0105 23:42:52.953287 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76d8878c56-6h7md_b405fa3f-251f-4a35-a756-ec7791a18148/barbican-api-log/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.339908 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-747547f55d-5jgl7_ceaa20cf-58aa-4c8b-ae30-96e31512e48a/barbican-keystone-listener/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.441188 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-747547f55d-5jgl7_ceaa20cf-58aa-4c8b-ae30-96e31512e48a/barbican-keystone-listener-log/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.462622 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86ddb9889-nhjck_0586e65e-8f8f-40c4-be35-2753b10187e3/barbican-worker/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.552935 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86ddb9889-nhjck_0586e65e-8f8f-40c4-be35-2753b10187e3/barbican-worker-log/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.650490 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2f0499a8-aa42-4255-b00a-02a54e530c2c/ceilometer-central-agent/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.671609 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2f0499a8-aa42-4255-b00a-02a54e530c2c/ceilometer-notification-agent/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.770184 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2f0499a8-aa42-4255-b00a-02a54e530c2c/proxy-httpd/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.893183 5034 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2f0499a8-aa42-4255-b00a-02a54e530c2c/sg-core/0.log" Jan 05 23:42:53 crc kubenswrapper[5034]: I0105 23:42:53.982991 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_01b11e57-4451-4a92-8fef-35e0026b1fad/cinder-api/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.030860 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_01b11e57-4451-4a92-8fef-35e0026b1fad/cinder-api-log/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.185943 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_080915f2-140d-4a59-9621-2677bb674ed6/cinder-scheduler/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.254374 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_080915f2-140d-4a59-9621-2677bb674ed6/probe/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.385147 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694df96497-fdnmh_62100b1d-d7f0-4e4c-ad84-f756873c21ca/init/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.657955 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694df96497-fdnmh_62100b1d-d7f0-4e4c-ad84-f756873c21ca/init/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.703560 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694df96497-fdnmh_62100b1d-d7f0-4e4c-ad84-f756873c21ca/dnsmasq-dns/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.722925 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4fd7feb8-6291-4928-bf9b-534253512819/glance-httpd/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.874110 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4fd7feb8-6291-4928-bf9b-534253512819/glance-log/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.958676 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5cee0d46-4ed0-4fc9-9f55-f035ef40fecc/glance-httpd/0.log" Jan 05 23:42:54 crc kubenswrapper[5034]: I0105 23:42:54.963277 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5cee0d46-4ed0-4fc9-9f55-f035ef40fecc/glance-log/0.log" Jan 05 23:42:55 crc kubenswrapper[5034]: I0105 23:42:55.196829 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7f9ccf9bf6-cd8v7_90de7a12-45fa-4f42-a94b-528884fd2afb/heat-api/0.log" Jan 05 23:42:55 crc kubenswrapper[5034]: I0105 23:42:55.342415 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-65bf95755f-d74sv_45da723d-b34f-40ef-8938-abf9f614c451/heat-cfnapi/0.log" Jan 05 23:42:55 crc kubenswrapper[5034]: I0105 23:42:55.474412 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-d63d-account-create-update-ddp5l_1b977528-7379-4e3d-b770-31df686e4fdc/mariadb-account-create-update/0.log" Jan 05 23:42:55 crc kubenswrapper[5034]: I0105 23:42:55.566723 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-db-create-5zqpd_591b10b9-dfcd-46d4-818f-dcfb0fea7ef4/mariadb-database-create/0.log" Jan 05 23:42:55 crc kubenswrapper[5034]: I0105 23:42:55.688841 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-db-sync-s7t6l_999257cc-1aca-404f-834c-ddb12373b69e/heat-db-sync/0.log" Jan 05 23:42:55 crc kubenswrapper[5034]: I0105 23:42:55.792354 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-8ffd6ffc4-pmz56_19228d32-e4a4-408a-9951-730f63c4e7e7/heat-engine/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.052013 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d7cbf459d-njclw_3aac7508-cac0-47ed-8636-25c715c0b8b9/horizon-log/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.089648 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d7cbf459d-njclw_3aac7508-cac0-47ed-8636-25c715c0b8b9/horizon/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.161130 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-64fbcf98c8-v62hn_34d5e0e7-e3e5-4315-9706-70605d6fdff5/keystone-api/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.265920 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1e73955a-5f4a-4db0-a0ae-3844971de40d/kube-state-metrics/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.346505 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_34738e72-bd43-492f-be85-d38dffc26db8/adoption/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.422537 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6f858aaf-b558-44a2-ab96-dc3372e35537/memcached/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.603350 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5797d7d97c-twwkv_c26b529d-4e2d-489e-a15a-e9344e5cb5cd/neutron-api/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.603898 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5797d7d97c-twwkv_c26b529d-4e2d-489e-a15a-e9344e5cb5cd/neutron-httpd/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.825321 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_71c3861d-c8e7-48ba-a4bb-6d40369e62d9/nova-api-api/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.883345 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_71c3861d-c8e7-48ba-a4bb-6d40369e62d9/nova-api-log/0.log" Jan 05 23:42:56 crc kubenswrapper[5034]: I0105 23:42:56.922670 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_af2b754d-ffa3-4818-8e7e-519696b826fd/nova-cell0-conductor-conductor/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.115015 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1f8eec6a-8f7d-4ab9-b092-49a1709ba4ff/nova-cell1-conductor-conductor/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.138780 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1f3fafcf-53f5-436e-8ea6-e5c7e709c5cd/nova-cell1-novncproxy-novncproxy/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.389509 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2975cea1-64d5-478a-93dc-bf0a82b75277/nova-metadata-log/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.512990 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_9bc18f0e-03e7-493a-a861-454e4b8140c5/nova-scheduler-scheduler/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.604898 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6d47d78db7-prdg8_545f7399-0c28-4bd5-b8ee-dcfbe7511654/init/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.620159 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2975cea1-64d5-478a-93dc-bf0a82b75277/nova-metadata-metadata/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.807323 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6d47d78db7-prdg8_545f7399-0c28-4bd5-b8ee-dcfbe7511654/init/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.853442 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-5jdkb_7a44d744-4036-49a9-ba5d-dc55a15b65e8/init/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.868134 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6d47d78db7-prdg8_545f7399-0c28-4bd5-b8ee-dcfbe7511654/octavia-api-provider-agent/0.log" Jan 05 23:42:57 crc kubenswrapper[5034]: I0105 23:42:57.928842 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6d47d78db7-prdg8_545f7399-0c28-4bd5-b8ee-dcfbe7511654/octavia-api/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.045337 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-5jdkb_7a44d744-4036-49a9-ba5d-dc55a15b65e8/init/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.127713 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-5jdkb_7a44d744-4036-49a9-ba5d-dc55a15b65e8/octavia-healthmanager/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.143424 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-2tndz_4eb82084-cf41-47cb-96b1-0824f002f49a/init/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.304299 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-2tndz_4eb82084-cf41-47cb-96b1-0824f002f49a/init/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.320421 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-597bd57878-qnf52_083c512f-4112-4de4-a1a6-7ee6463e36bf/init/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.346881 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-2tndz_4eb82084-cf41-47cb-96b1-0824f002f49a/octavia-housekeeping/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.561929 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-597bd57878-qnf52_083c512f-4112-4de4-a1a6-7ee6463e36bf/octavia-amphora-httpd/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.598324 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-597bd57878-qnf52_083c512f-4112-4de4-a1a6-7ee6463e36bf/init/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.604790 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-pbtqd_040f9f95-2d60-448e-b698-041cdd081ec2/init/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.802983 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-rsyslog-pbtqd_040f9f95-2d60-448e-b698-041cdd081ec2/init/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.815323 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-pbtqd_040f9f95-2d60-448e-b698-041cdd081ec2/octavia-rsyslog/0.log" Jan 05 23:42:58 crc kubenswrapper[5034]: I0105 23:42:58.857927 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vfjct_44398740-2fc7-4264-9614-fc0a5fe8e35e/init/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.064507 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vfjct_44398740-2fc7-4264-9614-fc0a5fe8e35e/init/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.106408 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e1d0345a-603d-46a0-832f-94e63db6d310/mysql-bootstrap/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.186461 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-vfjct_44398740-2fc7-4264-9614-fc0a5fe8e35e/octavia-worker/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.372816 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e1d0345a-603d-46a0-832f-94e63db6d310/mysql-bootstrap/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.410093 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e1d0345a-603d-46a0-832f-94e63db6d310/galera/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.454262 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_07f5f475-a53a-467e-8c0f-365d09603cd0/mysql-bootstrap/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.593218 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_07f5f475-a53a-467e-8c0f-365d09603cd0/galera/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.611852 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_07f5f475-a53a-467e-8c0f-365d09603cd0/mysql-bootstrap/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.686507 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a84f3fae-075e-46ce-9d73-12611ea3eebd/openstackclient/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.798127 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-22df5_f52c5c97-57a6-4509-bc03-46bb124297d2/openstack-network-exporter/0.log" Jan 05 23:42:59 crc kubenswrapper[5034]: I0105 23:42:59.898480 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7kpp9_0c05c173-ed68-4f0e-a223-649e5c2cb5f3/ovsdb-server-init/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.060648 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7kpp9_0c05c173-ed68-4f0e-a223-649e5c2cb5f3/ovsdb-server/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.077409 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7kpp9_0c05c173-ed68-4f0e-a223-649e5c2cb5f3/ovsdb-server-init/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.090047 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-w6vqp_9fd531ae-4d59-4afe-aa01-6ad07a62b64c/ovn-controller/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.151636 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7kpp9_0c05c173-ed68-4f0e-a223-649e5c2cb5f3/ovs-vswitchd/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.246181 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_3c6fe902-ce44-49f5-9131-1e83280ca4c0/adoption/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.338311 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a/openstack-network-exporter/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.387001 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f4a9cf38-ed6a-4b7c-8d64-53ee11becc2a/ovn-northd/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.578511 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8908e22b-933f-43ef-a88d-058610136209/openstack-network-exporter/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.613679 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8908e22b-933f-43ef-a88d-058610136209/ovsdbserver-nb/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.643784 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_26e0a32c-15a9-48e9-9ecc-97bfdbcc923e/openstack-network-exporter/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.792355 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_687b551c-0fde-402a-abf5-076a664acdac/openstack-network-exporter/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.839183 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_26e0a32c-15a9-48e9-9ecc-97bfdbcc923e/ovsdbserver-nb/0.log" Jan 05 23:43:00 crc kubenswrapper[5034]: I0105 23:43:00.859606 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_687b551c-0fde-402a-abf5-076a664acdac/ovsdbserver-nb/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.040937 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_014bb009-3fd2-42e1-b51e-21437f54d5d4/openstack-network-exporter/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.076103 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_014bb009-3fd2-42e1-b51e-21437f54d5d4/ovsdbserver-sb/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.251486 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_dcce2e6b-c1ba-4d6b-a972-96f864bd3468/openstack-network-exporter/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.264588 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_dcce2e6b-c1ba-4d6b-a972-96f864bd3468/ovsdbserver-sb/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.585693 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_16add7fd-19c1-4795-b9ba-f4e692b65fb6/openstack-network-exporter/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.649960 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-2_16add7fd-19c1-4795-b9ba-f4e692b65fb6/ovsdbserver-sb/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.657685 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5df86d5b5d-4gxb8_4be72b04-172a-4ba5-83da-ff186babfdd1/placement-api/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.791878 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5df86d5b5d-4gxb8_4be72b04-172a-4ba5-83da-ff186babfdd1/placement-log/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.870557 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-chqn2r_0bfad148-5a3b-4f64-bfbc-eb42df610c76/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 05 23:43:01 crc kubenswrapper[5034]: I0105 23:43:01.892685 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cwwtzj_c6c94074-1549-40f8-925b-64cbbc9a5a34/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.068804 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_72d2c93e-1be9-4e89-878e-3f802869275c/init-config-reloader/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.226205 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_72d2c93e-1be9-4e89-878e-3f802869275c/prometheus/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.231456 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_72d2c93e-1be9-4e89-878e-3f802869275c/config-reloader/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.254764 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_72d2c93e-1be9-4e89-878e-3f802869275c/init-config-reloader/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.299571 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_72d2c93e-1be9-4e89-878e-3f802869275c/thanos-sidecar/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.413118 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_672f6cae-debf-4932-9a31-f10b8ce91e93/setup-container/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.583957 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_672f6cae-debf-4932-9a31-f10b8ce91e93/setup-container/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.594719 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_672f6cae-debf-4932-9a31-f10b8ce91e93/rabbitmq/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.622721 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_908c96c4-673a-4ee7-a399-cca966e2281b/setup-container/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.771506 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_908c96c4-673a-4ee7-a399-cca966e2281b/setup-container/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.818671 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_908c96c4-673a-4ee7-a399-cca966e2281b/rabbitmq/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.924745 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-57d5547d58-mm9qr_707ebc8c-26aa-41c2-8dd4-14cf8df07600/proxy-server/0.log" Jan 05 23:43:02 crc kubenswrapper[5034]: I0105 23:43:02.943656 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-57d5547d58-mm9qr_707ebc8c-26aa-41c2-8dd4-14cf8df07600/proxy-httpd/0.log" Jan 05 23:43:03 crc kubenswrapper[5034]: I0105 23:43:03.027215 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4bzn4_521454f6-b1af-4a98-bf39-89711fb8840b/swift-ring-rebalance/0.log" Jan 05 23:43:27 crc kubenswrapper[5034]: I0105 23:43:27.831048 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-svkbp_6fd87fd1-9317-4df1-be11-c509d9643f84/manager/0.log" Jan 05 23:43:27 crc kubenswrapper[5034]: I0105 23:43:27.922125 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz_e164ea47-5917-4609-94fe-1befb86c13dc/util/0.log" Jan 05 23:43:28 crc kubenswrapper[5034]: I0105 23:43:28.132669 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz_e164ea47-5917-4609-94fe-1befb86c13dc/util/0.log" Jan 05 23:43:28 crc kubenswrapper[5034]: I0105 23:43:28.145751 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz_e164ea47-5917-4609-94fe-1befb86c13dc/pull/0.log" Jan 05 23:43:28 crc kubenswrapper[5034]: I0105 23:43:28.209054 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz_e164ea47-5917-4609-94fe-1befb86c13dc/pull/0.log" Jan 05 23:43:28 crc kubenswrapper[5034]: I0105 23:43:28.387789 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz_e164ea47-5917-4609-94fe-1befb86c13dc/util/0.log" Jan 05 23:43:28 crc kubenswrapper[5034]: I0105 23:43:28.457803 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz_e164ea47-5917-4609-94fe-1befb86c13dc/pull/0.log" Jan 05 23:43:28 crc kubenswrapper[5034]: I0105 23:43:28.489930 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c492f4a2acbe999979d81ac1ecd4689567cc191c6b559aea7f96522f5ddtbfz_e164ea47-5917-4609-94fe-1befb86c13dc/extract/0.log" Jan 05 23:43:28 crc kubenswrapper[5034]: I0105 23:43:28.673735 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-kgxlf_896815d6-faea-4d19-ac51-a51653fcb729/manager/0.log" Jan 05 23:43:28 crc kubenswrapper[5034]: I0105 23:43:28.713235 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-jvpk6_496eda61-616b-4c26-8a21-f7c32d44b301/manager/0.log" Jan 05 23:43:29 crc kubenswrapper[5034]: I0105 23:43:29.008178 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-xjmnm_eaf2fcd8-230f-414c-88dc-68ccb91b009e/manager/0.log" Jan 05 23:43:29 crc kubenswrapper[5034]: I0105 23:43:29.074410 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-z872k_1649d2ab-0b0e-475a-be2b-485845105d31/manager/0.log" Jan 05 23:43:29 crc kubenswrapper[5034]: I0105 23:43:29.268128 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-k2nrc_8cb3a336-40ad-44f2-8817-09d0d9807a1a/manager/0.log" Jan 05 23:43:29 crc kubenswrapper[5034]: I0105 23:43:29.576446 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-w9zgw_c0ad1066-4da0-43bb-8599-3bd8a5e445f4/manager/0.log" Jan 05 23:43:29 crc kubenswrapper[5034]: I0105 23:43:29.856292 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-dqv5l_40b7d083-f9c2-4114-9fea-7b205a0f2699/manager/0.log" Jan 05 23:43:29 crc kubenswrapper[5034]: I0105 23:43:29.877338 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-4lxms_7ab1c07d-8d12-4d71-b191-3334da2b04dd/manager/0.log" Jan 05 23:43:29 crc kubenswrapper[5034]: I0105 23:43:29.878064 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-qjlr7_55117dc3-bdf7-4967-830e-8465bd939669/manager/0.log" Jan 05 23:43:30 crc kubenswrapper[5034]: I0105 23:43:30.179121 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-cg7pq_3a788872-b35e-4386-97a0-55b225e77f3c/manager/0.log" Jan 05 23:43:30 crc kubenswrapper[5034]: I0105 23:43:30.185779 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-xj8gg_862bb25b-65e6-4866-a881-99ff200bd44c/manager/0.log" Jan 05 23:43:30 crc kubenswrapper[5034]: I0105 23:43:30.513890 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-trszk_523f1764-4ebe-4424-911d-e9e8b9a06576/manager/0.log" Jan 05 23:43:30 crc kubenswrapper[5034]: I0105 23:43:30.533121 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-brmhw_2e72dc34-e146-4759-92a9-472b505e452e/manager/0.log" Jan 05 23:43:30 crc kubenswrapper[5034]: I0105 23:43:30.652020 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-596c464d77km7w8_70d0025c-b385-4d1d-aaae-12d916644086/manager/0.log" Jan 05 23:43:31 crc kubenswrapper[5034]: I0105 23:43:31.083666 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5845bc5b8-d878g_1dd0be48-f659-4579-9170-4525ab5afc33/operator/0.log" Jan 05 23:43:31 crc kubenswrapper[5034]: I0105 23:43:31.351624 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-d8q77_b9d0176c-2e60-4822-930e-a59454554a09/registry-server/0.log" Jan 05 23:43:31 crc kubenswrapper[5034]: I0105 23:43:31.497022 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-qc578_f446cb2d-a8c4-460c-8538-3cd339280043/manager/0.log" Jan 05 23:43:31 crc kubenswrapper[5034]: I0105 23:43:31.597016 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-gqgz4_51008687-6437-41e7-be67-d0b8504af846/manager/0.log" Jan 05 23:43:32 crc kubenswrapper[5034]: I0105 23:43:32.079236 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zsckf_ec391381-ae2f-4f53-a3bc-42b7b47a3727/operator/0.log" Jan 05 23:43:32 crc kubenswrapper[5034]: I0105 23:43:32.269507 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-66brd_a5129b08-723a-4f31-aeca-bfa82f192ca6/manager/0.log" Jan 05 23:43:32 crc kubenswrapper[5034]: I0105 23:43:32.446380 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-b9x9f_7f51b169-2963-4bcc-881f-ad5e0eb8ebd7/manager/0.log" Jan 05 23:43:32 crc kubenswrapper[5034]: I0105 23:43:32.546278 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-49jv5_c4c31dca-5e18-4a1f-a8ea-f10abb68d479/manager/0.log" Jan 05 23:43:32 crc kubenswrapper[5034]: I0105 23:43:32.607436 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-zbzfs_cf91ac7d-1a00-4658-9270-8a7186602088/manager/0.log" Jan 05 23:43:33 crc kubenswrapper[5034]: I0105 23:43:33.184953 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-555f86cbf8-jsp5s_23276bd0-4dde-4a42-8c97-481788b2c35c/manager/0.log" Jan 05 23:43:52 crc kubenswrapper[5034]: I0105 23:43:52.507173 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-j75n2_648556f6-8682-4cf8-beaa-bdf944bb7f14/control-plane-machine-set-operator/0.log" Jan 05 23:43:52 crc kubenswrapper[5034]: I0105 23:43:52.681712 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-94ljp_579527a6-1737-40f2-8cfa-1798cc770142/kube-rbac-proxy/0.log" Jan 05 23:43:52 crc kubenswrapper[5034]: I0105 23:43:52.711594 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-94ljp_579527a6-1737-40f2-8cfa-1798cc770142/machine-api-operator/0.log" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.355057 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sl48z"] Jan 05 23:44:03 crc kubenswrapper[5034]: E0105 23:44:03.356047 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8e8b1a-edb4-4c23-b879-a35caee1570a" containerName="container-00" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.356062 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8e8b1a-edb4-4c23-b879-a35caee1570a" containerName="container-00" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.356282 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8e8b1a-edb4-4c23-b879-a35caee1570a" containerName="container-00" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.357802 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.375788 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl48z"] Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.468421 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmb7k\" (UniqueName: \"kubernetes.io/projected/32f554fd-3af0-43a7-bf49-4a06b692ca13-kube-api-access-bmb7k\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.468783 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-catalog-content\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.469210 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-utilities\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.571737 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-utilities\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.571951 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmb7k\" (UniqueName: \"kubernetes.io/projected/32f554fd-3af0-43a7-bf49-4a06b692ca13-kube-api-access-bmb7k\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.572024 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-catalog-content\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.572309 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-utilities\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.572677 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-catalog-content\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.592099 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bmb7k\" (UniqueName: \"kubernetes.io/projected/32f554fd-3af0-43a7-bf49-4a06b692ca13-kube-api-access-bmb7k\") pod \"redhat-marketplace-sl48z\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:03 crc kubenswrapper[5034]: I0105 23:44:03.694341 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:04 crc kubenswrapper[5034]: I0105 23:44:04.220299 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl48z"] Jan 05 23:44:04 crc kubenswrapper[5034]: I0105 23:44:04.441774 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-dxxgk_4f0c5ff9-b60a-43a3-bfbc-4790b7622531/cert-manager-controller/0.log" Jan 05 23:44:04 crc kubenswrapper[5034]: I0105 23:44:04.699315 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-twtrf_e28a75e2-b589-439a-ad3e-95fbe1db1d9c/cert-manager-cainjector/0.log" Jan 05 23:44:04 crc kubenswrapper[5034]: I0105 23:44:04.739030 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-nf4xl_d51296ce-60f9-44a1-8fce-7915e000ee74/cert-manager-webhook/0.log" Jan 05 23:44:05 crc kubenswrapper[5034]: I0105 23:44:05.144481 5034 generic.go:334] "Generic (PLEG): container finished" podID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerID="adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902" exitCode=0 Jan 05 23:44:05 crc kubenswrapper[5034]: I0105 23:44:05.144539 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl48z" event={"ID":"32f554fd-3af0-43a7-bf49-4a06b692ca13","Type":"ContainerDied","Data":"adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902"} Jan 05 23:44:05 crc kubenswrapper[5034]: I0105 23:44:05.144571 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl48z" event={"ID":"32f554fd-3af0-43a7-bf49-4a06b692ca13","Type":"ContainerStarted","Data":"0cb590d8eb580e7531d908a73cf72e894359a13f0d4341c699a19ef836b208ad"} Jan 05 23:44:05 crc kubenswrapper[5034]: I0105 23:44:05.147165 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 23:44:06 crc kubenswrapper[5034]: I0105 23:44:06.156442 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl48z" event={"ID":"32f554fd-3af0-43a7-bf49-4a06b692ca13","Type":"ContainerStarted","Data":"705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e"} Jan 05 23:44:07 crc kubenswrapper[5034]: I0105 23:44:07.166992 5034 generic.go:334] "Generic (PLEG): container finished" podID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerID="705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e" exitCode=0 Jan 05 23:44:07 crc kubenswrapper[5034]: I0105 23:44:07.167108 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl48z" event={"ID":"32f554fd-3af0-43a7-bf49-4a06b692ca13","Type":"ContainerDied","Data":"705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e"} Jan 05 23:44:09 crc kubenswrapper[5034]: I0105 23:44:09.191249 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl48z" 
event={"ID":"32f554fd-3af0-43a7-bf49-4a06b692ca13","Type":"ContainerStarted","Data":"e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b"} Jan 05 23:44:09 crc kubenswrapper[5034]: I0105 23:44:09.212783 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sl48z" podStartSLOduration=3.472994514 podStartE2EDuration="6.212754207s" podCreationTimestamp="2026-01-05 23:44:03 +0000 UTC" firstStartedPulling="2026-01-05 23:44:05.146884057 +0000 UTC m=+6737.518883496" lastFinishedPulling="2026-01-05 23:44:07.88664375 +0000 UTC m=+6740.258643189" observedRunningTime="2026-01-05 23:44:09.212635134 +0000 UTC m=+6741.584634573" watchObservedRunningTime="2026-01-05 23:44:09.212754207 +0000 UTC m=+6741.584753656" Jan 05 23:44:13 crc kubenswrapper[5034]: I0105 23:44:13.694704 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:13 crc kubenswrapper[5034]: I0105 23:44:13.695303 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:13 crc kubenswrapper[5034]: I0105 23:44:13.764382 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:14 crc kubenswrapper[5034]: I0105 23:44:14.286918 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:14 crc kubenswrapper[5034]: I0105 23:44:14.360917 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl48z"] Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.251676 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sl48z" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerName="registry-server" containerID="cri-o://e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b" gracePeriod=2 Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.795461 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.885461 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmb7k\" (UniqueName: \"kubernetes.io/projected/32f554fd-3af0-43a7-bf49-4a06b692ca13-kube-api-access-bmb7k\") pod \"32f554fd-3af0-43a7-bf49-4a06b692ca13\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.885549 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-utilities\") pod \"32f554fd-3af0-43a7-bf49-4a06b692ca13\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.885600 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-catalog-content\") pod \"32f554fd-3af0-43a7-bf49-4a06b692ca13\" (UID: \"32f554fd-3af0-43a7-bf49-4a06b692ca13\") " Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.886679 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-utilities" (OuterVolumeSpecName: "utilities") pod "32f554fd-3af0-43a7-bf49-4a06b692ca13" (UID: "32f554fd-3af0-43a7-bf49-4a06b692ca13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.912426 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32f554fd-3af0-43a7-bf49-4a06b692ca13" (UID: "32f554fd-3af0-43a7-bf49-4a06b692ca13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.918798 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f554fd-3af0-43a7-bf49-4a06b692ca13-kube-api-access-bmb7k" (OuterVolumeSpecName: "kube-api-access-bmb7k") pod "32f554fd-3af0-43a7-bf49-4a06b692ca13" (UID: "32f554fd-3af0-43a7-bf49-4a06b692ca13"). InnerVolumeSpecName "kube-api-access-bmb7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.988627 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmb7k\" (UniqueName: \"kubernetes.io/projected/32f554fd-3af0-43a7-bf49-4a06b692ca13-kube-api-access-bmb7k\") on node \"crc\" DevicePath \"\"" Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.988693 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:44:16 crc kubenswrapper[5034]: I0105 23:44:16.988708 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f554fd-3af0-43a7-bf49-4a06b692ca13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.264818 5034 generic.go:334] "Generic (PLEG): container finished" podID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerID="e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b" exitCode=0 Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.264883 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl48z" event={"ID":"32f554fd-3af0-43a7-bf49-4a06b692ca13","Type":"ContainerDied","Data":"e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b"} Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.265209 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl48z" event={"ID":"32f554fd-3af0-43a7-bf49-4a06b692ca13","Type":"ContainerDied","Data":"0cb590d8eb580e7531d908a73cf72e894359a13f0d4341c699a19ef836b208ad"} Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.265269 5034 scope.go:117] "RemoveContainer" containerID="e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.264909 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl48z" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.299854 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl48z"] Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.299894 5034 scope.go:117] "RemoveContainer" containerID="705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.323375 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl48z"] Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.323388 5034 scope.go:117] "RemoveContainer" containerID="adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.381800 5034 scope.go:117] "RemoveContainer" containerID="e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b" Jan 05 23:44:17 crc kubenswrapper[5034]: E0105 23:44:17.382629 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b\": container with ID starting with e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b not found: ID does not exist" containerID="e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.382676 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b"} err="failed to get container status \"e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b\": rpc error: code = NotFound desc = could not find container \"e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b\": container with ID starting with e3dbfe832447fb1a05d2d7163cca02f1c155f12b282a7ff566443d81058d1c7b not found: ID does not exist" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.382706 5034 scope.go:117] "RemoveContainer" containerID="705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e" Jan 05 23:44:17 crc kubenswrapper[5034]: E0105 23:44:17.383167 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e\": container with ID starting with 705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e not found: ID does not exist" containerID="705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.383210 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e"} err="failed to get container status \"705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e\": rpc error: code = NotFound desc = could not find container \"705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e\": container with ID starting with 705272e15b5b36560f261972ed8d019056a368137264526961aa048ec4bce51e not found: ID does not exist" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.383240 5034 scope.go:117] "RemoveContainer" containerID="adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902" Jan 05 23:44:17 crc kubenswrapper[5034]: E0105 23:44:17.383551 5034 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902\": container with ID starting with adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902 not found: ID does not exist" containerID="adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.383583 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902"} err="failed to get container status \"adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902\": rpc error: code = NotFound desc = could not find container \"adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902\": container with ID starting with adc4d1a29b4a0446112d8e71bab0f3e245ebabdbf1fe1c1e6eab18c67cfac902 not found: ID does not exist" Jan 05 23:44:17 crc kubenswrapper[5034]: I0105 23:44:17.850299 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" path="/var/lib/kubelet/pods/32f554fd-3af0-43a7-bf49-4a06b692ca13/volumes" Jan 05 23:44:18 crc kubenswrapper[5034]: I0105 23:44:18.333434 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-f97xv_9899044f-5e88-4777-b627-f7dcc60960a5/nmstate-console-plugin/0.log" Jan 05 23:44:18 crc kubenswrapper[5034]: I0105 23:44:18.518700 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-65khr_d6e6c223-89b8-4e35-a4cf-1442486c98dd/kube-rbac-proxy/0.log" Jan 05 23:44:18 crc kubenswrapper[5034]: I0105 23:44:18.534379 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7cn7q_04881f29-9640-4518-bf55-f893f4f27c26/nmstate-handler/0.log" Jan 05 23:44:18 crc kubenswrapper[5034]: I0105 23:44:18.648890 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-65khr_d6e6c223-89b8-4e35-a4cf-1442486c98dd/nmstate-metrics/0.log" Jan 05 23:44:18 crc kubenswrapper[5034]: I0105 23:44:18.739817 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-9fngq_87c765d8-d039-4341-9980-e2b22b54ceac/nmstate-operator/0.log" Jan 05 23:44:18 crc kubenswrapper[5034]: I0105 23:44:18.888050 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-bw8c5_f1fd0590-a9f0-4da7-ad04-5e9b440bf8bb/nmstate-webhook/0.log" Jan 05 23:44:33 crc kubenswrapper[5034]: I0105 23:44:33.639569 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-nn49h_dbf8030a-b9cc-402c-90ae-51ee7b7e0883/kube-rbac-proxy/0.log" Jan 05 23:44:33 crc kubenswrapper[5034]: I0105 23:44:33.915008 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-frr-files/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.033894 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-nn49h_dbf8030a-b9cc-402c-90ae-51ee7b7e0883/controller/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.183192 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-frr-files/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 
23:44:34.200352 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-metrics/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.232876 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-reloader/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.248894 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-reloader/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.519150 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-metrics/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.540178 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-reloader/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.567419 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-frr-files/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.598792 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-metrics/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.725640 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-frr-files/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.742492 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-reloader/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.758525 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/cp-metrics/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.835284 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/controller/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.955321 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/frr-metrics/0.log" Jan 05 23:44:34 crc kubenswrapper[5034]: I0105 23:44:34.990412 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/kube-rbac-proxy/0.log" Jan 05 23:44:35 crc kubenswrapper[5034]: I0105 23:44:35.048616 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/kube-rbac-proxy-frr/0.log" Jan 05 23:44:35 crc kubenswrapper[5034]: I0105 23:44:35.169199 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/reloader/0.log" Jan 05 23:44:35 crc kubenswrapper[5034]: I0105 23:44:35.219942 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-5cxjt_6c30e9a6-f8a5-471b-a98a-6488b00be9b3/frr-k8s-webhook-server/0.log" Jan 05 23:44:35 crc kubenswrapper[5034]: I0105 23:44:35.486151 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-59f99c4667-mtskn_8200b031-48ef-4b6c-8ee2-bddf6e8cde98/manager/0.log" Jan 05 23:44:35 crc kubenswrapper[5034]: I0105 23:44:35.661739 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6d7c77b87b-md8xj_f76a0977-f500-4b92-8eee-304a3c7385ad/webhook-server/0.log" Jan 05 23:44:35 crc kubenswrapper[5034]: I0105 23:44:35.688803 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k7tnh_7560ef5c-bc0a-42e7-9a1a-e610555272ad/kube-rbac-proxy/0.log" Jan 05 23:44:36 crc kubenswrapper[5034]: I0105 23:44:36.688768 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k7tnh_7560ef5c-bc0a-42e7-9a1a-e610555272ad/speaker/0.log" Jan 05 23:44:37 crc kubenswrapper[5034]: I0105 23:44:37.917209 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4hzvh_88af2613-0081-477e-983f-1d8a7a35f282/frr/0.log" Jan 05 23:44:48 crc kubenswrapper[5034]: I0105 23:44:48.635806 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq_6178a12f-8e0f-4038-9bd2-e8a21d4dcd22/util/0.log" Jan 05 23:44:48 crc kubenswrapper[5034]: I0105 23:44:48.864099 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq_6178a12f-8e0f-4038-9bd2-e8a21d4dcd22/util/0.log" Jan 05 23:44:48 crc kubenswrapper[5034]: I0105 23:44:48.921523 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq_6178a12f-8e0f-4038-9bd2-e8a21d4dcd22/pull/0.log" Jan 05 23:44:48 crc kubenswrapper[5034]: I0105 23:44:48.986435 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq_6178a12f-8e0f-4038-9bd2-e8a21d4dcd22/pull/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.120826 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq_6178a12f-8e0f-4038-9bd2-e8a21d4dcd22/extract/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.135293 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq_6178a12f-8e0f-4038-9bd2-e8a21d4dcd22/pull/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.143541 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931armrjq_6178a12f-8e0f-4038-9bd2-e8a21d4dcd22/util/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.341784 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7_e8dba0a8-44e1-4a23-b14a-85826a656669/util/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.519197 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7_e8dba0a8-44e1-4a23-b14a-85826a656669/util/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.536098 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7_e8dba0a8-44e1-4a23-b14a-85826a656669/pull/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.577536 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7_e8dba0a8-44e1-4a23-b14a-85826a656669/pull/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.725251 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7_e8dba0a8-44e1-4a23-b14a-85826a656669/util/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.732862 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7_e8dba0a8-44e1-4a23-b14a-85826a656669/pull/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.796108 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pn9r7_e8dba0a8-44e1-4a23-b14a-85826a656669/extract/0.log" Jan 05 23:44:49 crc kubenswrapper[5034]: I0105 23:44:49.941015 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4_5b029c07-fd45-41d1-a25f-9a0653f3c70b/util/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.094000 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4_5b029c07-fd45-41d1-a25f-9a0653f3c70b/util/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.112923 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4_5b029c07-fd45-41d1-a25f-9a0653f3c70b/pull/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.149284 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4_5b029c07-fd45-41d1-a25f-9a0653f3c70b/pull/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.303861 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4_5b029c07-fd45-41d1-a25f-9a0653f3c70b/extract/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.305633 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4_5b029c07-fd45-41d1-a25f-9a0653f3c70b/pull/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.363264 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fqfn4_5b029c07-fd45-41d1-a25f-9a0653f3c70b/util/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.469009 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.469093 5034 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.492142 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922_737ad453-4b93-48b2-aa08-c3e69e6f81e7/util/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.689413 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922_737ad453-4b93-48b2-aa08-c3e69e6f81e7/pull/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.696753 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922_737ad453-4b93-48b2-aa08-c3e69e6f81e7/pull/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.702048 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922_737ad453-4b93-48b2-aa08-c3e69e6f81e7/util/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.932918 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922_737ad453-4b93-48b2-aa08-c3e69e6f81e7/util/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.965304 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922_737ad453-4b93-48b2-aa08-c3e69e6f81e7/pull/0.log" Jan 05 23:44:50 crc kubenswrapper[5034]: I0105 23:44:50.969698 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ls922_737ad453-4b93-48b2-aa08-c3e69e6f81e7/extract/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.145494 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-775ks_43c22fec-05cd-4506-a6e4-0508d9a3251a/extract-utilities/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.302837 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-775ks_43c22fec-05cd-4506-a6e4-0508d9a3251a/extract-utilities/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.334563 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-775ks_43c22fec-05cd-4506-a6e4-0508d9a3251a/extract-content/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.349784 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-775ks_43c22fec-05cd-4506-a6e4-0508d9a3251a/extract-content/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.494512 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-775ks_43c22fec-05cd-4506-a6e4-0508d9a3251a/extract-content/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.520387 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-775ks_43c22fec-05cd-4506-a6e4-0508d9a3251a/extract-utilities/0.log" Jan 05 23:44:51 crc 
kubenswrapper[5034]: I0105 23:44:51.733000 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rslnm_7be6f642-7b7e-4b18-a3f6-184fca000d37/extract-utilities/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.928441 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rslnm_7be6f642-7b7e-4b18-a3f6-184fca000d37/extract-content/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.967799 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rslnm_7be6f642-7b7e-4b18-a3f6-184fca000d37/extract-utilities/0.log" Jan 05 23:44:51 crc kubenswrapper[5034]: I0105 23:44:51.973284 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rslnm_7be6f642-7b7e-4b18-a3f6-184fca000d37/extract-content/0.log" Jan 05 23:44:52 crc kubenswrapper[5034]: I0105 23:44:52.203064 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rslnm_7be6f642-7b7e-4b18-a3f6-184fca000d37/extract-utilities/0.log" Jan 05 23:44:52 crc kubenswrapper[5034]: I0105 23:44:52.238443 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rslnm_7be6f642-7b7e-4b18-a3f6-184fca000d37/extract-content/0.log" Jan 05 23:44:52 crc kubenswrapper[5034]: I0105 23:44:52.450505 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wvbkt_29925def-614b-4b01-ad4f-056d5f252000/marketplace-operator/0.log" Jan 05 23:44:52 crc kubenswrapper[5034]: I0105 23:44:52.656860 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-28ptj_1d105868-804c-47f7-a59c-d289cf852378/extract-utilities/0.log" Jan 05 23:44:52 crc kubenswrapper[5034]: I0105 23:44:52.727963 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-775ks_43c22fec-05cd-4506-a6e4-0508d9a3251a/registry-server/0.log" Jan 05 23:44:52 crc kubenswrapper[5034]: I0105 23:44:52.875312 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-28ptj_1d105868-804c-47f7-a59c-d289cf852378/extract-utilities/0.log" Jan 05 23:44:52 crc kubenswrapper[5034]: I0105 23:44:52.922348 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-28ptj_1d105868-804c-47f7-a59c-d289cf852378/extract-content/0.log" Jan 05 23:44:52 crc kubenswrapper[5034]: I0105 23:44:52.931764 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-28ptj_1d105868-804c-47f7-a59c-d289cf852378/extract-content/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.145840 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-28ptj_1d105868-804c-47f7-a59c-d289cf852378/extract-utilities/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.232119 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-28ptj_1d105868-804c-47f7-a59c-d289cf852378/extract-content/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.434106 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9zvx5_7196d3d2-86be-4905-ba31-121f2e3e9c8a/extract-utilities/0.log" Jan 05 23:44:53 crc 
kubenswrapper[5034]: I0105 23:44:53.653117 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9zvx5_7196d3d2-86be-4905-ba31-121f2e3e9c8a/extract-content/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.683180 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9zvx5_7196d3d2-86be-4905-ba31-121f2e3e9c8a/extract-utilities/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.695989 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9zvx5_7196d3d2-86be-4905-ba31-121f2e3e9c8a/extract-content/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.783581 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-28ptj_1d105868-804c-47f7-a59c-d289cf852378/registry-server/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.848619 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rslnm_7be6f642-7b7e-4b18-a3f6-184fca000d37/registry-server/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.882728 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9zvx5_7196d3d2-86be-4905-ba31-121f2e3e9c8a/extract-content/0.log" Jan 05 23:44:53 crc kubenswrapper[5034]: I0105 23:44:53.940456 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9zvx5_7196d3d2-86be-4905-ba31-121f2e3e9c8a/extract-utilities/0.log" Jan 05 23:44:54 crc kubenswrapper[5034]: I0105 23:44:54.839356 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9zvx5_7196d3d2-86be-4905-ba31-121f2e3e9c8a/registry-server/0.log" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.153323 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78"] Jan 05 23:45:00 crc kubenswrapper[5034]: E0105 23:45:00.154400 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerName="registry-server" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.154418 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerName="registry-server" Jan 05 23:45:00 crc kubenswrapper[5034]: E0105 23:45:00.154457 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerName="extract-content" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.154466 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerName="extract-content" Jan 05 23:45:00 crc kubenswrapper[5034]: E0105 23:45:00.154489 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerName="extract-utilities" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.154495 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerName="extract-utilities" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.154700 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f554fd-3af0-43a7-bf49-4a06b692ca13" containerName="registry-server" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.155559 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.158219 5034 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.160925 5034 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.165754 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78"] Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.280162 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04fa99ca-d398-4a54-992d-b46493359c2e-config-volume\") pod \"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.280471 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04fa99ca-d398-4a54-992d-b46493359c2e-secret-volume\") pod \"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.280564 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzss\" (UniqueName: \"kubernetes.io/projected/04fa99ca-d398-4a54-992d-b46493359c2e-kube-api-access-sqzss\") pod \"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.383104 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzss\" (UniqueName: \"kubernetes.io/projected/04fa99ca-d398-4a54-992d-b46493359c2e-kube-api-access-sqzss\") pod \"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.383207 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04fa99ca-d398-4a54-992d-b46493359c2e-config-volume\") pod \"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.383404 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04fa99ca-d398-4a54-992d-b46493359c2e-secret-volume\") pod \"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.384116 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04fa99ca-d398-4a54-992d-b46493359c2e-config-volume\") pod 
\"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.389801 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04fa99ca-d398-4a54-992d-b46493359c2e-secret-volume\") pod \"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.401807 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzss\" (UniqueName: \"kubernetes.io/projected/04fa99ca-d398-4a54-992d-b46493359c2e-kube-api-access-sqzss\") pod \"collect-profiles-29460945-4qh78\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.482944 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:00 crc kubenswrapper[5034]: I0105 23:45:00.955642 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78"] Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.050420 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-d63d-account-create-update-ddp5l"] Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.061657 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-d63d-account-create-update-ddp5l"] Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.076249 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-5zqpd"] Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.090945 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-5zqpd"] Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.685879 5034 generic.go:334] "Generic (PLEG): container finished" podID="04fa99ca-d398-4a54-992d-b46493359c2e" containerID="968b58c2e27b5abb04793ad538e163650f9d9913a4e4c28f6c370c31b27bedbe" exitCode=0 Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.685975 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" event={"ID":"04fa99ca-d398-4a54-992d-b46493359c2e","Type":"ContainerDied","Data":"968b58c2e27b5abb04793ad538e163650f9d9913a4e4c28f6c370c31b27bedbe"} Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.686312 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" event={"ID":"04fa99ca-d398-4a54-992d-b46493359c2e","Type":"ContainerStarted","Data":"1aeeedb2dd95fe3c67e55f77e66a953a549a9250554d4a7798d4683631fa2fce"} Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.858851 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b977528-7379-4e3d-b770-31df686e4fdc" path="/var/lib/kubelet/pods/1b977528-7379-4e3d-b770-31df686e4fdc/volumes" Jan 05 23:45:01 crc kubenswrapper[5034]: I0105 23:45:01.859868 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591b10b9-dfcd-46d4-818f-dcfb0fea7ef4" path="/var/lib/kubelet/pods/591b10b9-dfcd-46d4-818f-dcfb0fea7ef4/volumes" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 
23:45:03.083876 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.144120 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04fa99ca-d398-4a54-992d-b46493359c2e-secret-volume\") pod \"04fa99ca-d398-4a54-992d-b46493359c2e\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.144382 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04fa99ca-d398-4a54-992d-b46493359c2e-config-volume\") pod \"04fa99ca-d398-4a54-992d-b46493359c2e\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.144470 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqzss\" (UniqueName: \"kubernetes.io/projected/04fa99ca-d398-4a54-992d-b46493359c2e-kube-api-access-sqzss\") pod \"04fa99ca-d398-4a54-992d-b46493359c2e\" (UID: \"04fa99ca-d398-4a54-992d-b46493359c2e\") " Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.145058 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04fa99ca-d398-4a54-992d-b46493359c2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "04fa99ca-d398-4a54-992d-b46493359c2e" (UID: "04fa99ca-d398-4a54-992d-b46493359c2e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.151384 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04fa99ca-d398-4a54-992d-b46493359c2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04fa99ca-d398-4a54-992d-b46493359c2e" (UID: "04fa99ca-d398-4a54-992d-b46493359c2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.151441 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04fa99ca-d398-4a54-992d-b46493359c2e-kube-api-access-sqzss" (OuterVolumeSpecName: "kube-api-access-sqzss") pod "04fa99ca-d398-4a54-992d-b46493359c2e" (UID: "04fa99ca-d398-4a54-992d-b46493359c2e"). InnerVolumeSpecName "kube-api-access-sqzss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.247377 5034 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04fa99ca-d398-4a54-992d-b46493359c2e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.247423 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqzss\" (UniqueName: \"kubernetes.io/projected/04fa99ca-d398-4a54-992d-b46493359c2e-kube-api-access-sqzss\") on node \"crc\" DevicePath \"\"" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.247436 5034 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04fa99ca-d398-4a54-992d-b46493359c2e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.704674 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" event={"ID":"04fa99ca-d398-4a54-992d-b46493359c2e","Type":"ContainerDied","Data":"1aeeedb2dd95fe3c67e55f77e66a953a549a9250554d4a7798d4683631fa2fce"} Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.704916 5034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aeeedb2dd95fe3c67e55f77e66a953a549a9250554d4a7798d4683631fa2fce" Jan 05 23:45:03 crc kubenswrapper[5034]: I0105 23:45:03.704749 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460945-4qh78" Jan 05 23:45:04 crc kubenswrapper[5034]: I0105 23:45:04.219478 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf"] Jan 05 23:45:04 crc kubenswrapper[5034]: I0105 23:45:04.234039 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460900-xsmhf"] Jan 05 23:45:05 crc kubenswrapper[5034]: I0105 23:45:05.850225 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fb373d-1a70-40f5-b36a-dc74973a135f" path="/var/lib/kubelet/pods/37fb373d-1a70-40f5-b36a-dc74973a135f/volumes" Jan 05 23:45:05 crc kubenswrapper[5034]: I0105 23:45:05.932927 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w7qpj_9ab76c0a-964d-4bed-a8d1-5fd30f83d707/prometheus-operator/0.log" Jan 05 23:45:06 crc kubenswrapper[5034]: I0105 23:45:06.142018 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-555df57bb9-86mxm_2fdec36c-ddb5-4cfb-8414-5464dd814235/prometheus-operator-admission-webhook/0.log" Jan 05 23:45:06 crc kubenswrapper[5034]: I0105 23:45:06.184982 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-555df57bb9-pt6qf_cf2d5e61-2b14-473b-9d9d-082751280399/prometheus-operator-admission-webhook/0.log" Jan 05 23:45:06 crc kubenswrapper[5034]: I0105 23:45:06.328773 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4klvr_2075b65e-4db7-4345-8739-b9d2db8b4148/operator/0.log" Jan 05 23:45:06 crc kubenswrapper[5034]: I0105 23:45:06.426280 5034 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6d7gn_55cecb6b-56b7-4d48-a9a4-a5b2f74e164c/perses-operator/0.log" Jan 05 23:45:14 crc kubenswrapper[5034]: I0105 23:45:14.238376 5034 scope.go:117] "RemoveContainer" containerID="b73044afed82a0f6a824a956e46818b6bdc705af5f1c0c2d38d82078551503ef" Jan 05 23:45:14 crc kubenswrapper[5034]: I0105 23:45:14.275575 5034 scope.go:117] "RemoveContainer" containerID="e3a2eea89a6d50a81b766d77fe26e1952d7a0387d886e32366855808048924ec" Jan 05 23:45:14 crc kubenswrapper[5034]: I0105 23:45:14.344755 5034 scope.go:117] "RemoveContainer" containerID="e0ef74aaa394c57dd2f92409463f857358259df0ef6ae5f9f19771a84c43b921" Jan 05 23:45:14 crc kubenswrapper[5034]: I0105 23:45:14.417149 5034 scope.go:117] "RemoveContainer" containerID="3914db75d254f1556b8d895d4c5980de455100df7d5493a4f365d665799c3efe" Jan 05 23:45:14 crc kubenswrapper[5034]: I0105 23:45:14.441622 5034 scope.go:117] "RemoveContainer" containerID="f55953e92b6565d431c7320114197a89a82b81939defe67c75d4f29903b50d3e" Jan 05 23:45:14 crc kubenswrapper[5034]: I0105 23:45:14.470397 5034 scope.go:117] "RemoveContainer" containerID="74daa58ceb606ae0a82b66318bcd8f40ece915832c76efaec898bbf1f2cbec42" Jan 05 23:45:14 crc kubenswrapper[5034]: I0105 23:45:14.518887 5034 scope.go:117] "RemoveContainer" containerID="74b3f867d3d738cdc14271a06f29e017c7d9c0b3ce2e69acb5e6ee2434529458" Jan 05 23:45:15 crc kubenswrapper[5034]: I0105 23:45:15.042414 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-s7t6l"] Jan 05 23:45:15 crc kubenswrapper[5034]: I0105 23:45:15.055942 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-s7t6l"] Jan 05 23:45:15 crc kubenswrapper[5034]: I0105 23:45:15.849657 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="999257cc-1aca-404f-834c-ddb12373b69e" path="/var/lib/kubelet/pods/999257cc-1aca-404f-834c-ddb12373b69e/volumes" Jan 05 23:45:19 crc kubenswrapper[5034]: E0105 23:45:19.355540 5034 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.156:60188->38.102.83.156:40825: read tcp 38.102.83.156:60188->38.102.83.156:40825: read: connection reset by peer Jan 05 23:45:20 crc kubenswrapper[5034]: I0105 23:45:20.468486 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:45:20 crc kubenswrapper[5034]: I0105 23:45:20.468544 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:45:37 crc kubenswrapper[5034]: E0105 23:45:37.724737 5034 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.156:47796->38.102.83.156:40825: read tcp 38.102.83.156:47796->38.102.83.156:40825: read: connection reset by peer Jan 05 23:45:37 crc kubenswrapper[5034]: E0105 23:45:37.724783 5034 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.156:47796->38.102.83.156:40825: write tcp 38.102.83.156:47796->38.102.83.156:40825: write: broken pipe Jan 05 23:45:50 crc kubenswrapper[5034]: I0105 23:45:50.468775 
5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:45:50 crc kubenswrapper[5034]: I0105 23:45:50.469549 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:45:50 crc kubenswrapper[5034]: I0105 23:45:50.469612 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 23:45:50 crc kubenswrapper[5034]: I0105 23:45:50.470782 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"caec8cb16ece25b9abbce837443558fede133cd330a8e77c74b35a5b29449ad5"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 23:45:50 crc kubenswrapper[5034]: I0105 23:45:50.470839 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://caec8cb16ece25b9abbce837443558fede133cd330a8e77c74b35a5b29449ad5" gracePeriod=600 Jan 05 23:45:51 crc kubenswrapper[5034]: I0105 23:45:51.169193 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" containerID="caec8cb16ece25b9abbce837443558fede133cd330a8e77c74b35a5b29449ad5" exitCode=0 Jan 05 23:45:51 crc kubenswrapper[5034]: I0105 23:45:51.169278 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"caec8cb16ece25b9abbce837443558fede133cd330a8e77c74b35a5b29449ad5"} Jan 05 23:45:51 crc kubenswrapper[5034]: I0105 23:45:51.169827 5034 scope.go:117] "RemoveContainer" containerID="7903ff6439052ff34dcfc93ae2e8967cc670fa257fe4d757fd32f22a1ab69e16" Jan 05 23:45:51 crc kubenswrapper[5034]: I0105 23:45:51.170106 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerStarted","Data":"d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5"} Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.375845 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h7ncv"] Jan 05 23:46:14 crc kubenswrapper[5034]: E0105 23:46:14.377032 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fa99ca-d398-4a54-992d-b46493359c2e" containerName="collect-profiles" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.377050 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fa99ca-d398-4a54-992d-b46493359c2e" containerName="collect-profiles" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.378544 5034 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="04fa99ca-d398-4a54-992d-b46493359c2e" containerName="collect-profiles" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.380488 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.393170 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h7ncv"] Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.552991 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdcb\" (UniqueName: \"kubernetes.io/projected/1e292d5b-8eff-4f72-a19e-3563200f5723-kube-api-access-djdcb\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.553199 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-utilities\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.553259 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-catalog-content\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.638022 5034 scope.go:117] "RemoveContainer" containerID="95aa58bc50c9fc9f6a343d379414e0c304f7a138b92dbea700d3d48d50801cc0" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.655590 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-utilities\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.656090 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-catalog-content\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.656122 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdcb\" (UniqueName: \"kubernetes.io/projected/1e292d5b-8eff-4f72-a19e-3563200f5723-kube-api-access-djdcb\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.655990 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-utilities\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.656977 5034 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-catalog-content\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.683032 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdcb\" (UniqueName: \"kubernetes.io/projected/1e292d5b-8eff-4f72-a19e-3563200f5723-kube-api-access-djdcb\") pod \"certified-operators-h7ncv\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:14 crc kubenswrapper[5034]: I0105 23:46:14.706230 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:16 crc kubenswrapper[5034]: I0105 23:46:16.200586 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h7ncv"] Jan 05 23:46:16 crc kubenswrapper[5034]: I0105 23:46:16.655660 5034 generic.go:334] "Generic (PLEG): container finished" podID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerID="c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff" exitCode=0 Jan 05 23:46:16 crc kubenswrapper[5034]: I0105 23:46:16.656506 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7ncv" event={"ID":"1e292d5b-8eff-4f72-a19e-3563200f5723","Type":"ContainerDied","Data":"c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff"} Jan 05 23:46:16 crc kubenswrapper[5034]: I0105 23:46:16.656546 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7ncv" event={"ID":"1e292d5b-8eff-4f72-a19e-3563200f5723","Type":"ContainerStarted","Data":"5f9ee15f9c313ef4fa3ae87014126e397db762e69a481eff32b03275a3b96b89"} Jan 05 23:46:17 crc kubenswrapper[5034]: I0105 23:46:17.667993 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7ncv" event={"ID":"1e292d5b-8eff-4f72-a19e-3563200f5723","Type":"ContainerStarted","Data":"148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4"} Jan 05 23:46:18 crc kubenswrapper[5034]: I0105 23:46:18.679866 5034 generic.go:334] "Generic (PLEG): container finished" podID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerID="148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4" exitCode=0 Jan 05 23:46:18 crc kubenswrapper[5034]: I0105 23:46:18.679921 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7ncv" event={"ID":"1e292d5b-8eff-4f72-a19e-3563200f5723","Type":"ContainerDied","Data":"148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4"} Jan 05 23:46:19 crc kubenswrapper[5034]: I0105 23:46:19.690427 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7ncv" event={"ID":"1e292d5b-8eff-4f72-a19e-3563200f5723","Type":"ContainerStarted","Data":"68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6"} Jan 05 23:46:19 crc kubenswrapper[5034]: I0105 23:46:19.707792 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h7ncv" podStartSLOduration=3.322887834 podStartE2EDuration="5.707767409s" podCreationTimestamp="2026-01-05 23:46:14 +0000 UTC" firstStartedPulling="2026-01-05 23:46:16.657744861 +0000 UTC 
m=+6869.029744300" lastFinishedPulling="2026-01-05 23:46:19.042624436 +0000 UTC m=+6871.414623875" observedRunningTime="2026-01-05 23:46:19.706504993 +0000 UTC m=+6872.078504432" watchObservedRunningTime="2026-01-05 23:46:19.707767409 +0000 UTC m=+6872.079766848" Jan 05 23:46:24 crc kubenswrapper[5034]: I0105 23:46:24.706746 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:24 crc kubenswrapper[5034]: I0105 23:46:24.709535 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:24 crc kubenswrapper[5034]: I0105 23:46:24.771714 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:25 crc kubenswrapper[5034]: I0105 23:46:25.791522 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:25 crc kubenswrapper[5034]: I0105 23:46:25.850118 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h7ncv"] Jan 05 23:46:27 crc kubenswrapper[5034]: I0105 23:46:27.759772 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h7ncv" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerName="registry-server" containerID="cri-o://68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6" gracePeriod=2 Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.257748 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.326977 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djdcb\" (UniqueName: \"kubernetes.io/projected/1e292d5b-8eff-4f72-a19e-3563200f5723-kube-api-access-djdcb\") pod \"1e292d5b-8eff-4f72-a19e-3563200f5723\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.327040 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-catalog-content\") pod \"1e292d5b-8eff-4f72-a19e-3563200f5723\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.327325 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-utilities\") pod \"1e292d5b-8eff-4f72-a19e-3563200f5723\" (UID: \"1e292d5b-8eff-4f72-a19e-3563200f5723\") " Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.328162 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-utilities" (OuterVolumeSpecName: "utilities") pod "1e292d5b-8eff-4f72-a19e-3563200f5723" (UID: "1e292d5b-8eff-4f72-a19e-3563200f5723"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.328751 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.344183 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e292d5b-8eff-4f72-a19e-3563200f5723-kube-api-access-djdcb" (OuterVolumeSpecName: "kube-api-access-djdcb") pod "1e292d5b-8eff-4f72-a19e-3563200f5723" (UID: "1e292d5b-8eff-4f72-a19e-3563200f5723"). InnerVolumeSpecName "kube-api-access-djdcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.410564 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e292d5b-8eff-4f72-a19e-3563200f5723" (UID: "1e292d5b-8eff-4f72-a19e-3563200f5723"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.430835 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djdcb\" (UniqueName: \"kubernetes.io/projected/1e292d5b-8eff-4f72-a19e-3563200f5723-kube-api-access-djdcb\") on node \"crc\" DevicePath \"\"" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.430869 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e292d5b-8eff-4f72-a19e-3563200f5723-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.769247 5034 generic.go:334] "Generic (PLEG): container finished" podID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerID="68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6" exitCode=0 Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.769299 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7ncv" event={"ID":"1e292d5b-8eff-4f72-a19e-3563200f5723","Type":"ContainerDied","Data":"68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6"} Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.769339 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h7ncv" event={"ID":"1e292d5b-8eff-4f72-a19e-3563200f5723","Type":"ContainerDied","Data":"5f9ee15f9c313ef4fa3ae87014126e397db762e69a481eff32b03275a3b96b89"} Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.769337 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h7ncv" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.769361 5034 scope.go:117] "RemoveContainer" containerID="68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.789251 5034 scope.go:117] "RemoveContainer" containerID="148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.814431 5034 scope.go:117] "RemoveContainer" containerID="c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.836243 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h7ncv"] Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.846008 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h7ncv"] Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.890634 5034 scope.go:117] "RemoveContainer" containerID="68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6" Jan 05 23:46:28 crc kubenswrapper[5034]: E0105 23:46:28.891344 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6\": container with ID starting with 68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6 not found: ID does not exist" containerID="68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.891399 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6"} err="failed to get container status \"68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6\": rpc error: code = NotFound desc = could not find container \"68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6\": container with ID starting with 68545ff2f1bcdcbf104a96d45a22c7a4cf28cf4e1de8858008ae3c8bf35726f6 not found: ID does not exist" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.891431 5034 scope.go:117] "RemoveContainer" containerID="148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4" Jan 05 23:46:28 crc kubenswrapper[5034]: E0105 23:46:28.891653 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4\": container with ID starting with 148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4 not found: ID does not exist" containerID="148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.891673 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4"} err="failed to get container status \"148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4\": rpc error: code = NotFound desc = could not find container \"148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4\": container with ID starting with 148e30f5566f7214a35865f54ba97d6a09aceefbdb7da0b1f4fbf698686473f4 not found: ID does not exist" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.891689 5034 scope.go:117] "RemoveContainer" 
containerID="c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff" Jan 05 23:46:28 crc kubenswrapper[5034]: E0105 23:46:28.891863 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff\": container with ID starting with c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff not found: ID does not exist" containerID="c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff" Jan 05 23:46:28 crc kubenswrapper[5034]: I0105 23:46:28.891881 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff"} err="failed to get container status \"c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff\": rpc error: code = NotFound desc = could not find container \"c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff\": container with ID starting with c6e9021c59c4763c623ca9ba1cdce36486f6b4958225df0b5c9bbdcaca2851ff not found: ID does not exist" Jan 05 23:46:29 crc kubenswrapper[5034]: I0105 23:46:29.853015 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" path="/var/lib/kubelet/pods/1e292d5b-8eff-4f72-a19e-3563200f5723/volumes" Jan 05 23:46:53 crc kubenswrapper[5034]: I0105 23:46:53.008276 5034 generic.go:334] "Generic (PLEG): container finished" podID="51f20dee-93fc-4732-a939-b64019e28734" containerID="35d5ec16103f17584659229f3a1346461bfc671a14431c343ae33a0e20767c80" exitCode=0 Jan 05 23:46:53 crc kubenswrapper[5034]: I0105 23:46:53.008352 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5pvlv/must-gather-tqvqv" event={"ID":"51f20dee-93fc-4732-a939-b64019e28734","Type":"ContainerDied","Data":"35d5ec16103f17584659229f3a1346461bfc671a14431c343ae33a0e20767c80"} Jan 05 23:46:53 crc kubenswrapper[5034]: I0105 23:46:53.010049 5034 scope.go:117] "RemoveContainer" containerID="35d5ec16103f17584659229f3a1346461bfc671a14431c343ae33a0e20767c80" Jan 05 23:46:53 crc kubenswrapper[5034]: I0105 23:46:53.263299 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5pvlv_must-gather-tqvqv_51f20dee-93fc-4732-a939-b64019e28734/gather/0.log" Jan 05 23:47:01 crc kubenswrapper[5034]: I0105 23:47:01.676028 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5pvlv/must-gather-tqvqv"] Jan 05 23:47:01 crc kubenswrapper[5034]: I0105 23:47:01.677039 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5pvlv/must-gather-tqvqv" podUID="51f20dee-93fc-4732-a939-b64019e28734" containerName="copy" containerID="cri-o://39b861ec63e08de25cc2a99e9ccac04ac3ec6e4eabe1686009f225be408c52ff" gracePeriod=2 Jan 05 23:47:01 crc kubenswrapper[5034]: I0105 23:47:01.689455 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5pvlv/must-gather-tqvqv"] Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.100358 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5pvlv_must-gather-tqvqv_51f20dee-93fc-4732-a939-b64019e28734/copy/0.log" Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.101093 5034 generic.go:334] "Generic (PLEG): container finished" podID="51f20dee-93fc-4732-a939-b64019e28734" containerID="39b861ec63e08de25cc2a99e9ccac04ac3ec6e4eabe1686009f225be408c52ff" exitCode=143 Jan 05 
23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.294992 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5pvlv_must-gather-tqvqv_51f20dee-93fc-4732-a939-b64019e28734/copy/0.log" Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.296480 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5pvlv/must-gather-tqvqv" Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.435381 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wr9r\" (UniqueName: \"kubernetes.io/projected/51f20dee-93fc-4732-a939-b64019e28734-kube-api-access-7wr9r\") pod \"51f20dee-93fc-4732-a939-b64019e28734\" (UID: \"51f20dee-93fc-4732-a939-b64019e28734\") " Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.435919 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51f20dee-93fc-4732-a939-b64019e28734-must-gather-output\") pod \"51f20dee-93fc-4732-a939-b64019e28734\" (UID: \"51f20dee-93fc-4732-a939-b64019e28734\") " Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.442736 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f20dee-93fc-4732-a939-b64019e28734-kube-api-access-7wr9r" (OuterVolumeSpecName: "kube-api-access-7wr9r") pod "51f20dee-93fc-4732-a939-b64019e28734" (UID: "51f20dee-93fc-4732-a939-b64019e28734"). InnerVolumeSpecName "kube-api-access-7wr9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.541118 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wr9r\" (UniqueName: \"kubernetes.io/projected/51f20dee-93fc-4732-a939-b64019e28734-kube-api-access-7wr9r\") on node \"crc\" DevicePath \"\"" Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.610391 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f20dee-93fc-4732-a939-b64019e28734-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "51f20dee-93fc-4732-a939-b64019e28734" (UID: "51f20dee-93fc-4732-a939-b64019e28734"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:47:02 crc kubenswrapper[5034]: I0105 23:47:02.643847 5034 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51f20dee-93fc-4732-a939-b64019e28734-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 05 23:47:03 crc kubenswrapper[5034]: I0105 23:47:03.114964 5034 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5pvlv_must-gather-tqvqv_51f20dee-93fc-4732-a939-b64019e28734/copy/0.log" Jan 05 23:47:03 crc kubenswrapper[5034]: I0105 23:47:03.116881 5034 scope.go:117] "RemoveContainer" containerID="39b861ec63e08de25cc2a99e9ccac04ac3ec6e4eabe1686009f225be408c52ff" Jan 05 23:47:03 crc kubenswrapper[5034]: I0105 23:47:03.116901 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5pvlv/must-gather-tqvqv" Jan 05 23:47:03 crc kubenswrapper[5034]: I0105 23:47:03.162853 5034 scope.go:117] "RemoveContainer" containerID="35d5ec16103f17584659229f3a1346461bfc671a14431c343ae33a0e20767c80" Jan 05 23:47:03 crc kubenswrapper[5034]: I0105 23:47:03.851030 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f20dee-93fc-4732-a939-b64019e28734" path="/var/lib/kubelet/pods/51f20dee-93fc-4732-a939-b64019e28734/volumes" Jan 05 23:47:50 crc kubenswrapper[5034]: I0105 23:47:50.469003 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:47:50 crc kubenswrapper[5034]: I0105 23:47:50.469554 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:47:55 crc kubenswrapper[5034]: I0105 23:47:55.043046 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-vmrvq"] Jan 05 23:47:55 crc kubenswrapper[5034]: I0105 23:47:55.053341 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-9a4c-account-create-update-89d7j"] Jan 05 23:47:55 crc kubenswrapper[5034]: I0105 23:47:55.062719 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-vmrvq"] Jan 05 23:47:55 crc kubenswrapper[5034]: I0105 23:47:55.074599 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-9a4c-account-create-update-89d7j"] Jan 05 23:47:55 crc kubenswrapper[5034]: I0105 23:47:55.852106 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6254af-35d4-4259-869d-194ae72e9c8a" path="/var/lib/kubelet/pods/6b6254af-35d4-4259-869d-194ae72e9c8a/volumes" Jan 05 23:47:55 crc kubenswrapper[5034]: I0105 23:47:55.853322 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1bcc6da-3adc-495c-b32c-c328ecd78165" path="/var/lib/kubelet/pods/c1bcc6da-3adc-495c-b32c-c328ecd78165/volumes" Jan 05 23:48:11 crc kubenswrapper[5034]: I0105 23:48:11.030866 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-8d82c"] Jan 05 23:48:11 crc kubenswrapper[5034]: I0105 23:48:11.045646 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-8d82c"] Jan 05 23:48:11 crc kubenswrapper[5034]: I0105 23:48:11.850538 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b334f9-01bc-4ec5-a98e-a65e946939c9" path="/var/lib/kubelet/pods/90b334f9-01bc-4ec5-a98e-a65e946939c9/volumes" Jan 05 23:48:14 crc kubenswrapper[5034]: I0105 23:48:14.783708 5034 scope.go:117] "RemoveContainer" containerID="02a8ba172b626ca036caa14c60a73cdfcc37fca184afaf3f28e2d30be8a53d3b" Jan 05 23:48:14 crc kubenswrapper[5034]: I0105 23:48:14.815564 5034 scope.go:117] "RemoveContainer" containerID="3bd66723a4b603a86e035531aede708747bab3bcac3902b1c5ba5d0280dd1005" Jan 05 23:48:14 crc kubenswrapper[5034]: I0105 23:48:14.870202 5034 scope.go:117] "RemoveContainer" containerID="e9a9e8e585ac8f4c8dbd8822d7ffdc33e234297ba9f5341e2209d955ce5f8e1e" Jan 05 23:48:14 crc kubenswrapper[5034]: I0105 
23:48:14.920966 5034 scope.go:117] "RemoveContainer" containerID="31049c118ac1833c95aa687e3b52d15d9eec277676d5bdbb5b2b36c120e4c4be" Jan 05 23:48:15 crc kubenswrapper[5034]: I0105 23:48:15.004119 5034 scope.go:117] "RemoveContainer" containerID="3ccadc54da0f3e5b4e3dec95722672b4e917a815cd3412218b05c1e964987b52" Jan 05 23:48:20 crc kubenswrapper[5034]: I0105 23:48:20.468776 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:48:20 crc kubenswrapper[5034]: I0105 23:48:20.469441 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:48:50 crc kubenswrapper[5034]: I0105 23:48:50.469444 5034 patch_prober.go:28] interesting pod/machine-config-daemon-frlwc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 23:48:50 crc kubenswrapper[5034]: I0105 23:48:50.471296 5034 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 23:48:50 crc kubenswrapper[5034]: I0105 23:48:50.471464 5034 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" Jan 05 23:48:50 crc kubenswrapper[5034]: I0105 23:48:50.472274 5034 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5"} pod="openshift-machine-config-operator/machine-config-daemon-frlwc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 23:48:50 crc kubenswrapper[5034]: I0105 23:48:50.472588 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" containerName="machine-config-daemon" containerID="cri-o://d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" gracePeriod=600 Jan 05 23:48:50 crc kubenswrapper[5034]: E0105 23:48:50.592071 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:48:51 crc kubenswrapper[5034]: I0105 23:48:51.215058 5034 generic.go:334] "Generic (PLEG): container finished" podID="bdd89329-d259-499c-bfe9-747d547d10f6" 
containerID="d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" exitCode=0 Jan 05 23:48:51 crc kubenswrapper[5034]: I0105 23:48:51.215153 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" event={"ID":"bdd89329-d259-499c-bfe9-747d547d10f6","Type":"ContainerDied","Data":"d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5"} Jan 05 23:48:51 crc kubenswrapper[5034]: I0105 23:48:51.215643 5034 scope.go:117] "RemoveContainer" containerID="caec8cb16ece25b9abbce837443558fede133cd330a8e77c74b35a5b29449ad5" Jan 05 23:48:51 crc kubenswrapper[5034]: I0105 23:48:51.216901 5034 scope.go:117] "RemoveContainer" containerID="d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" Jan 05 23:48:51 crc kubenswrapper[5034]: E0105 23:48:51.217386 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:49:02 crc kubenswrapper[5034]: I0105 23:49:02.838892 5034 scope.go:117] "RemoveContainer" containerID="d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" Jan 05 23:49:02 crc kubenswrapper[5034]: E0105 23:49:02.840398 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:49:13 crc kubenswrapper[5034]: I0105 23:49:13.839043 5034 scope.go:117] "RemoveContainer" containerID="d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" Jan 05 23:49:13 crc kubenswrapper[5034]: E0105 23:49:13.840066 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.056354 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n2s8z"] Jan 05 23:49:15 crc kubenswrapper[5034]: E0105 23:49:15.057443 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f20dee-93fc-4732-a939-b64019e28734" containerName="gather" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.057462 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f20dee-93fc-4732-a939-b64019e28734" containerName="gather" Jan 05 23:49:15 crc kubenswrapper[5034]: E0105 23:49:15.057477 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerName="extract-utilities" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.057483 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" 
containerName="extract-utilities" Jan 05 23:49:15 crc kubenswrapper[5034]: E0105 23:49:15.057494 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerName="extract-content" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.057501 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerName="extract-content" Jan 05 23:49:15 crc kubenswrapper[5034]: E0105 23:49:15.057521 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerName="registry-server" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.057528 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerName="registry-server" Jan 05 23:49:15 crc kubenswrapper[5034]: E0105 23:49:15.057552 5034 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f20dee-93fc-4732-a939-b64019e28734" containerName="copy" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.057557 5034 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f20dee-93fc-4732-a939-b64019e28734" containerName="copy" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.057753 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f20dee-93fc-4732-a939-b64019e28734" containerName="gather" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.057767 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e292d5b-8eff-4f72-a19e-3563200f5723" containerName="registry-server" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.057791 5034 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f20dee-93fc-4732-a939-b64019e28734" containerName="copy" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.059498 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.109794 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n2s8z"] Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.143968 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlqf5\" (UniqueName: \"kubernetes.io/projected/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-kube-api-access-dlqf5\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.144158 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-utilities\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.144200 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-catalog-content\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.246317 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlqf5\" (UniqueName: \"kubernetes.io/projected/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-kube-api-access-dlqf5\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.246453 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-utilities\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.246485 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-catalog-content\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.247105 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-utilities\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.247473 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-catalog-content\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.274411 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dlqf5\" (UniqueName: \"kubernetes.io/projected/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-kube-api-access-dlqf5\") pod \"redhat-operators-n2s8z\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:15 crc kubenswrapper[5034]: I0105 23:49:15.624326 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:16 crc kubenswrapper[5034]: I0105 23:49:16.141534 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n2s8z"] Jan 05 23:49:16 crc kubenswrapper[5034]: I0105 23:49:16.655922 5034 generic.go:334] "Generic (PLEG): container finished" podID="8f3ea67d-8b85-4c29-a989-7440b9e80a6a" containerID="e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d" exitCode=0 Jan 05 23:49:16 crc kubenswrapper[5034]: I0105 23:49:16.656022 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2s8z" event={"ID":"8f3ea67d-8b85-4c29-a989-7440b9e80a6a","Type":"ContainerDied","Data":"e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d"} Jan 05 23:49:16 crc kubenswrapper[5034]: I0105 23:49:16.656658 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2s8z" event={"ID":"8f3ea67d-8b85-4c29-a989-7440b9e80a6a","Type":"ContainerStarted","Data":"10aa7b47691e84c8e7b1681224791b926ec56d6991ea25c4d09ecd550cf9a6ba"} Jan 05 23:49:16 crc kubenswrapper[5034]: I0105 23:49:16.658489 5034 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 23:49:17 crc kubenswrapper[5034]: I0105 23:49:17.668589 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2s8z" event={"ID":"8f3ea67d-8b85-4c29-a989-7440b9e80a6a","Type":"ContainerStarted","Data":"36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1"} Jan 05 23:49:19 crc kubenswrapper[5034]: I0105 23:49:19.850773 5034 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wmd75"] Jan 05 23:49:19 crc kubenswrapper[5034]: I0105 23:49:19.855377 5034 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:19 crc kubenswrapper[5034]: I0105 23:49:19.856193 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmd75"] Jan 05 23:49:19 crc kubenswrapper[5034]: I0105 23:49:19.942500 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xssb2\" (UniqueName: \"kubernetes.io/projected/904a9b0b-b47b-4d8f-b869-02252880a24b-kube-api-access-xssb2\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:19 crc kubenswrapper[5034]: I0105 23:49:19.942574 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-catalog-content\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:19 crc kubenswrapper[5034]: I0105 23:49:19.943398 5034 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-utilities\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:20 crc kubenswrapper[5034]: I0105 23:49:20.046072 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-utilities\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:20 crc kubenswrapper[5034]: I0105 23:49:20.046196 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xssb2\" (UniqueName: \"kubernetes.io/projected/904a9b0b-b47b-4d8f-b869-02252880a24b-kube-api-access-xssb2\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:20 crc kubenswrapper[5034]: I0105 23:49:20.046228 5034 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-catalog-content\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:20 crc kubenswrapper[5034]: I0105 23:49:20.046743 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-utilities\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:20 crc kubenswrapper[5034]: I0105 23:49:20.046794 5034 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-catalog-content\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:20 crc kubenswrapper[5034]: I0105 23:49:20.069441 5034 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xssb2\" (UniqueName: \"kubernetes.io/projected/904a9b0b-b47b-4d8f-b869-02252880a24b-kube-api-access-xssb2\") pod \"community-operators-wmd75\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:20 crc kubenswrapper[5034]: I0105 23:49:20.205585 5034 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:20 crc kubenswrapper[5034]: I0105 23:49:20.723317 5034 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmd75"] Jan 05 23:49:21 crc kubenswrapper[5034]: I0105 23:49:21.703007 5034 generic.go:334] "Generic (PLEG): container finished" podID="904a9b0b-b47b-4d8f-b869-02252880a24b" containerID="214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca" exitCode=0 Jan 05 23:49:21 crc kubenswrapper[5034]: I0105 23:49:21.703072 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmd75" event={"ID":"904a9b0b-b47b-4d8f-b869-02252880a24b","Type":"ContainerDied","Data":"214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca"} Jan 05 23:49:21 crc kubenswrapper[5034]: I0105 23:49:21.704545 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmd75" event={"ID":"904a9b0b-b47b-4d8f-b869-02252880a24b","Type":"ContainerStarted","Data":"4a5335587de3bacf2c090c1190ad2c14bbc051256898bd067103455bcdae9908"} Jan 05 23:49:21 crc kubenswrapper[5034]: I0105 23:49:21.710039 5034 generic.go:334] "Generic (PLEG): container finished" podID="8f3ea67d-8b85-4c29-a989-7440b9e80a6a" containerID="36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1" exitCode=0 Jan 05 23:49:21 crc kubenswrapper[5034]: I0105 23:49:21.710095 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2s8z" event={"ID":"8f3ea67d-8b85-4c29-a989-7440b9e80a6a","Type":"ContainerDied","Data":"36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1"} Jan 05 23:49:22 crc kubenswrapper[5034]: I0105 23:49:22.732598 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2s8z" event={"ID":"8f3ea67d-8b85-4c29-a989-7440b9e80a6a","Type":"ContainerStarted","Data":"df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e"} Jan 05 23:49:22 crc kubenswrapper[5034]: I0105 23:49:22.766893 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n2s8z" podStartSLOduration=2.246308125 podStartE2EDuration="7.766868791s" podCreationTimestamp="2026-01-05 23:49:15 +0000 UTC" firstStartedPulling="2026-01-05 23:49:16.658239719 +0000 UTC m=+7049.030239148" lastFinishedPulling="2026-01-05 23:49:22.178800365 +0000 UTC m=+7054.550799814" observedRunningTime="2026-01-05 23:49:22.764625107 +0000 UTC m=+7055.136624556" watchObservedRunningTime="2026-01-05 23:49:22.766868791 +0000 UTC m=+7055.138868230" Jan 05 23:49:23 crc kubenswrapper[5034]: I0105 23:49:23.746307 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmd75" event={"ID":"904a9b0b-b47b-4d8f-b869-02252880a24b","Type":"ContainerStarted","Data":"f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09"} Jan 05 23:49:24 crc kubenswrapper[5034]: I0105 23:49:24.757470 5034 generic.go:334] "Generic (PLEG): container finished" 
podID="904a9b0b-b47b-4d8f-b869-02252880a24b" containerID="f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09" exitCode=0 Jan 05 23:49:24 crc kubenswrapper[5034]: I0105 23:49:24.757808 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmd75" event={"ID":"904a9b0b-b47b-4d8f-b869-02252880a24b","Type":"ContainerDied","Data":"f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09"} Jan 05 23:49:25 crc kubenswrapper[5034]: I0105 23:49:25.626061 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:25 crc kubenswrapper[5034]: I0105 23:49:25.626426 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:25 crc kubenswrapper[5034]: I0105 23:49:25.768806 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmd75" event={"ID":"904a9b0b-b47b-4d8f-b869-02252880a24b","Type":"ContainerStarted","Data":"f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903"} Jan 05 23:49:25 crc kubenswrapper[5034]: I0105 23:49:25.788878 5034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wmd75" podStartSLOduration=3.349928723 podStartE2EDuration="6.78885882s" podCreationTimestamp="2026-01-05 23:49:19 +0000 UTC" firstStartedPulling="2026-01-05 23:49:21.706814317 +0000 UTC m=+7054.078813756" lastFinishedPulling="2026-01-05 23:49:25.145744414 +0000 UTC m=+7057.517743853" observedRunningTime="2026-01-05 23:49:25.787204203 +0000 UTC m=+7058.159203662" watchObservedRunningTime="2026-01-05 23:49:25.78885882 +0000 UTC m=+7058.160858259" Jan 05 23:49:25 crc kubenswrapper[5034]: I0105 23:49:25.838568 5034 scope.go:117] "RemoveContainer" containerID="d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" Jan 05 23:49:25 crc kubenswrapper[5034]: E0105 23:49:25.838827 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:49:26 crc kubenswrapper[5034]: I0105 23:49:26.676371 5034 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n2s8z" podUID="8f3ea67d-8b85-4c29-a989-7440b9e80a6a" containerName="registry-server" probeResult="failure" output=< Jan 05 23:49:26 crc kubenswrapper[5034]: timeout: failed to connect service ":50051" within 1s Jan 05 23:49:26 crc kubenswrapper[5034]: > Jan 05 23:49:30 crc kubenswrapper[5034]: I0105 23:49:30.205660 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:30 crc kubenswrapper[5034]: I0105 23:49:30.207106 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:30 crc kubenswrapper[5034]: I0105 23:49:30.256650 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:30 crc kubenswrapper[5034]: I0105 23:49:30.880733 5034 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:30 crc kubenswrapper[5034]: I0105 23:49:30.939299 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmd75"] Jan 05 23:49:32 crc kubenswrapper[5034]: I0105 23:49:32.840161 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wmd75" podUID="904a9b0b-b47b-4d8f-b869-02252880a24b" containerName="registry-server" containerID="cri-o://f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903" gracePeriod=2 Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.314338 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.452473 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xssb2\" (UniqueName: \"kubernetes.io/projected/904a9b0b-b47b-4d8f-b869-02252880a24b-kube-api-access-xssb2\") pod \"904a9b0b-b47b-4d8f-b869-02252880a24b\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.452725 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-catalog-content\") pod \"904a9b0b-b47b-4d8f-b869-02252880a24b\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.452867 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-utilities\") pod \"904a9b0b-b47b-4d8f-b869-02252880a24b\" (UID: \"904a9b0b-b47b-4d8f-b869-02252880a24b\") " Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.453829 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-utilities" (OuterVolumeSpecName: "utilities") pod "904a9b0b-b47b-4d8f-b869-02252880a24b" (UID: "904a9b0b-b47b-4d8f-b869-02252880a24b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.458985 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904a9b0b-b47b-4d8f-b869-02252880a24b-kube-api-access-xssb2" (OuterVolumeSpecName: "kube-api-access-xssb2") pod "904a9b0b-b47b-4d8f-b869-02252880a24b" (UID: "904a9b0b-b47b-4d8f-b869-02252880a24b"). InnerVolumeSpecName "kube-api-access-xssb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.502333 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "904a9b0b-b47b-4d8f-b869-02252880a24b" (UID: "904a9b0b-b47b-4d8f-b869-02252880a24b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.555272 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.555305 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a9b0b-b47b-4d8f-b869-02252880a24b-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.555315 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xssb2\" (UniqueName: \"kubernetes.io/projected/904a9b0b-b47b-4d8f-b869-02252880a24b-kube-api-access-xssb2\") on node \"crc\" DevicePath \"\"" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.853024 5034 generic.go:334] "Generic (PLEG): container finished" podID="904a9b0b-b47b-4d8f-b869-02252880a24b" containerID="f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903" exitCode=0 Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.853098 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmd75" event={"ID":"904a9b0b-b47b-4d8f-b869-02252880a24b","Type":"ContainerDied","Data":"f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903"} Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.853149 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmd75" event={"ID":"904a9b0b-b47b-4d8f-b869-02252880a24b","Type":"ContainerDied","Data":"4a5335587de3bacf2c090c1190ad2c14bbc051256898bd067103455bcdae9908"} Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.853176 5034 scope.go:117] "RemoveContainer" containerID="f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.853172 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wmd75" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.887574 5034 scope.go:117] "RemoveContainer" containerID="f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.911492 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmd75"] Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.919012 5034 scope.go:117] "RemoveContainer" containerID="214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.920534 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wmd75"] Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.975723 5034 scope.go:117] "RemoveContainer" containerID="f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903" Jan 05 23:49:33 crc kubenswrapper[5034]: E0105 23:49:33.976250 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903\": container with ID starting with f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903 not found: ID does not exist" containerID="f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.976296 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903"} err="failed to get container status \"f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903\": rpc error: code = NotFound desc = could not find container \"f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903\": container with ID starting with f9212c4079ac74c1d99c23522d114c3617edf39c46e39fe1cb8fa88fbc606903 not found: ID does not exist" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.976331 5034 scope.go:117] "RemoveContainer" containerID="f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09" Jan 05 23:49:33 crc kubenswrapper[5034]: E0105 23:49:33.976679 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09\": container with ID starting with f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09 not found: ID does not exist" containerID="f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.976707 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09"} err="failed to get container status \"f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09\": rpc error: code = NotFound desc = could not find container \"f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09\": container with ID starting with f8ad27831a5fc8331e45c10713f4870e6ba6b72143dc44d48f1fb969a813be09 not found: ID does not exist" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.976723 5034 scope.go:117] "RemoveContainer" containerID="214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca" Jan 05 23:49:33 crc kubenswrapper[5034]: E0105 23:49:33.977238 5034 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca\": container with ID starting with 214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca not found: ID does not exist" containerID="214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca" Jan 05 23:49:33 crc kubenswrapper[5034]: I0105 23:49:33.977267 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca"} err="failed to get container status \"214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca\": rpc error: code = NotFound desc = could not find container \"214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca\": container with ID starting with 214b3b8ea609272ae76b802f1f11a5a647e39c96e511947d8f13d347a36231ca not found: ID does not exist" Jan 05 23:49:35 crc kubenswrapper[5034]: I0105 23:49:35.695256 5034 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:35 crc kubenswrapper[5034]: I0105 23:49:35.748875 5034 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:35 crc kubenswrapper[5034]: I0105 23:49:35.849231 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="904a9b0b-b47b-4d8f-b869-02252880a24b" path="/var/lib/kubelet/pods/904a9b0b-b47b-4d8f-b869-02252880a24b/volumes" Jan 05 23:49:36 crc kubenswrapper[5034]: I0105 23:49:36.895965 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n2s8z"] Jan 05 23:49:36 crc kubenswrapper[5034]: I0105 23:49:36.896557 5034 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n2s8z" podUID="8f3ea67d-8b85-4c29-a989-7440b9e80a6a" containerName="registry-server" containerID="cri-o://df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e" gracePeriod=2 Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.342195 5034 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.387150 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlqf5\" (UniqueName: \"kubernetes.io/projected/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-kube-api-access-dlqf5\") pod \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.387286 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-catalog-content\") pod \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.387323 5034 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-utilities\") pod \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\" (UID: \"8f3ea67d-8b85-4c29-a989-7440b9e80a6a\") " Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.388392 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-utilities" (OuterVolumeSpecName: "utilities") pod "8f3ea67d-8b85-4c29-a989-7440b9e80a6a" (UID: "8f3ea67d-8b85-4c29-a989-7440b9e80a6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.404692 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-kube-api-access-dlqf5" (OuterVolumeSpecName: "kube-api-access-dlqf5") pod "8f3ea67d-8b85-4c29-a989-7440b9e80a6a" (UID: "8f3ea67d-8b85-4c29-a989-7440b9e80a6a"). InnerVolumeSpecName "kube-api-access-dlqf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.490368 5034 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlqf5\" (UniqueName: \"kubernetes.io/projected/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-kube-api-access-dlqf5\") on node \"crc\" DevicePath \"\"" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.490408 5034 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.494681 5034 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f3ea67d-8b85-4c29-a989-7440b9e80a6a" (UID: "8f3ea67d-8b85-4c29-a989-7440b9e80a6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.592649 5034 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3ea67d-8b85-4c29-a989-7440b9e80a6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.894024 5034 generic.go:334] "Generic (PLEG): container finished" podID="8f3ea67d-8b85-4c29-a989-7440b9e80a6a" containerID="df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e" exitCode=0 Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.894095 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2s8z" event={"ID":"8f3ea67d-8b85-4c29-a989-7440b9e80a6a","Type":"ContainerDied","Data":"df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e"} Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.894137 5034 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2s8z" event={"ID":"8f3ea67d-8b85-4c29-a989-7440b9e80a6a","Type":"ContainerDied","Data":"10aa7b47691e84c8e7b1681224791b926ec56d6991ea25c4d09ecd550cf9a6ba"} Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.894141 5034 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n2s8z" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.894160 5034 scope.go:117] "RemoveContainer" containerID="df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.922816 5034 scope.go:117] "RemoveContainer" containerID="36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.922884 5034 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n2s8z"] Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.935520 5034 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n2s8z"] Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.949616 5034 scope.go:117] "RemoveContainer" containerID="e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.994277 5034 scope.go:117] "RemoveContainer" containerID="df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e" Jan 05 23:49:37 crc kubenswrapper[5034]: E0105 23:49:37.994706 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e\": container with ID starting with df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e not found: ID does not exist" containerID="df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.994751 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e"} err="failed to get container status \"df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e\": rpc error: code = NotFound desc = could not find container \"df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e\": container with ID starting with df9db0af42476b205afec2ec2cc18bb4e6acd8974fd683c319ca8ea552a6a67e not found: ID does not exist" Jan 05 23:49:37 crc 
kubenswrapper[5034]: I0105 23:49:37.994780 5034 scope.go:117] "RemoveContainer" containerID="36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1" Jan 05 23:49:37 crc kubenswrapper[5034]: E0105 23:49:37.995120 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1\": container with ID starting with 36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1 not found: ID does not exist" containerID="36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.995142 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1"} err="failed to get container status \"36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1\": rpc error: code = NotFound desc = could not find container \"36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1\": container with ID starting with 36f1eed49cfe00278c568a40d0e1fe53cdd2880a64deaf2ac7424482a49b51d1 not found: ID does not exist" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.995156 5034 scope.go:117] "RemoveContainer" containerID="e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d" Jan 05 23:49:37 crc kubenswrapper[5034]: E0105 23:49:37.995482 5034 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d\": container with ID starting with e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d not found: ID does not exist" containerID="e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d" Jan 05 23:49:37 crc kubenswrapper[5034]: I0105 23:49:37.995543 5034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d"} err="failed to get container status \"e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d\": rpc error: code = NotFound desc = could not find container \"e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d\": container with ID starting with e4d764c288a6c8f6e2a907231b4df2704320ebd0b1ea4d4aae76c771bde9925d not found: ID does not exist" Jan 05 23:49:39 crc kubenswrapper[5034]: I0105 23:49:39.855505 5034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3ea67d-8b85-4c29-a989-7440b9e80a6a" path="/var/lib/kubelet/pods/8f3ea67d-8b85-4c29-a989-7440b9e80a6a/volumes" Jan 05 23:49:40 crc kubenswrapper[5034]: I0105 23:49:40.838507 5034 scope.go:117] "RemoveContainer" containerID="d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" Jan 05 23:49:40 crc kubenswrapper[5034]: E0105 23:49:40.838855 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:49:51 crc kubenswrapper[5034]: I0105 23:49:51.838380 5034 scope.go:117] "RemoveContainer" containerID="d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" 
Jan 05 23:49:51 crc kubenswrapper[5034]: E0105 23:49:51.839282 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6" Jan 05 23:50:06 crc kubenswrapper[5034]: I0105 23:50:06.839117 5034 scope.go:117] "RemoveContainer" containerID="d113d0f4f5d8803fdb9b44cbe0ae4e5b9fafedc98f0a28f3b97c4bf3121dc2b5" Jan 05 23:50:06 crc kubenswrapper[5034]: E0105 23:50:06.839934 5034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frlwc_openshift-machine-config-operator(bdd89329-d259-499c-bfe9-747d547d10f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-frlwc" podUID="bdd89329-d259-499c-bfe9-747d547d10f6"